11

Development of new irradiation techniques using gimbaled x-ray head / ジンバルX線ヘッドを用いた新規照射法の開発

Ono, Tomohiro 23 March 2016 (has links)
Kyoto University / 0048 / New-system course doctorate / Doctor of Medical Science / Doctoral degree 甲第19553号 / 医博第4060号 / 新制||医||1012 (University Library) / 32589 / Medical Science major, Graduate School of Medicine, Kyoto University / Examining committee: (Chair) Prof. 増永 慎一郎, Prof. 富樫 かおり, Prof. 武田 俊一 / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Medical Science / Kyoto University / DFAM
12

Dosimetric evaluation of the Acuros XB algorithm for a 4 MV photon beam in head and neck intensity-modulated radiation therapy. / 4MV-X線を用いた頭頸部強度変調放射線治療におけるAcuros XBアルゴリズムの物理的・臨床的線量評価

Hirata, Kimiko 23 March 2017 (has links)
Kyoto University / 0048 / New-system course doctorate / Doctor of Medical Science / Doctoral degree 甲第20248号 / 医博第4207号 / 新制||医||1020 (University Library) / Medical Science major, Graduate School of Medicine, Kyoto University / Examining committee: (Chair) Prof. 鈴木 実, Prof. 別所 和久, Prof. 大森 孝一 / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Medical Science / Kyoto University / DFAM
13

Utilizing Problem Structure in Optimization of Radiation Therapy

Carlsson, Fredrik January 2008 (has links)
In this thesis, optimization approaches for intensity-modulated radiation therapy are developed and evaluated with focus on numerical efficiency and treatment delivery aspects. The first two papers deal with strategies for solving fluence map optimization problems efficiently while avoiding solutions with jagged fluence profiles. The last two papers concern optimization of step-and-shoot parameters with emphasis on generating treatment plans that can be delivered efficiently and accurately.

In the first paper, the problem dimension of a fluence map optimization problem is reduced through a spectral decomposition of the Hessian of the objective function. The weights of the eigenvectors corresponding to the p largest eigenvalues are introduced as optimization variables, and the impact on the solution of varying p is studied. Including only a few eigenvector weights results in a faster initial decrease of the objective value, but an inferior solution, compared to optimization of the bixel weights. An approach combining eigenvector weights and bixel weights produces improved solutions, but at the expense of the pre-computational time for the spectral decomposition.

So-called iterative regularization is performed on fluence map optimization problems in the second paper. The idea is to find regular solutions by utilizing an optimization method that is able to find near-optimal solutions with non-jagged fluence profiles in few iterations. The suitability of a quasi-Newton sequential quadratic programming method is demonstrated by comparing the treatment quality of deliverable step-and-shoot plans, generated through leaf sequencing with a fixed number of segments, for different numbers of bixel-weight iterations. A conclusion is that over-optimization of the fluence map optimization problem prior to leaf sequencing should be avoided.
An approach for dynamically generating multileaf collimator segments using a column generation approach combined with optimization of segment shapes and weights is presented in the third paper. Numerical results demonstrate that the adjustment of leaf positions improves the plan quality and that satisfactory treatment plans are found with few segments. The method provides a tool for exploring the trade-off between plan quality and treatment complexity by generating a sequence of deliverable plans of increasing quality. The final paper is devoted to understanding the ability of the column generation approach in the third paper to find near-optimal solutions with very few columns compared to the problem dimension. The impact of different restrictions on the generated columns is studied, both in terms of numerical behaviour and convergence properties. A bound on the two-norm of the columns results in the conjugate-gradient method. Numerical results indicate that the appealing properties of the conjugate-gradient method on ill-conditioned problems are inherited in the column generation approach of the third paper. / QC 20100709
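The Hessian-based dimension reduction described for the first paper can be illustrated on a small quadratic toy problem. This is a sketch with synthetic data: the matrix `H` below merely stands in for the Hessian of a dose-based objective, the vector `x` for the bixel weights, and exact minimization over the eigenvector subspace replaces the iterative optimization the thesis would actually use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for fluence map optimization: minimize f(x) = 0.5 x'Hx - b'x,
# where H plays the role of the Hessian of a dose-based objective and x the
# bixel weights. (Synthetic data; the thesis works with clinical cases.)
n = 200
A = rng.standard_normal((n, n))
H = A @ A.T / n + 1e-3 * np.eye(n)   # symmetric positive definite
b = rng.standard_normal(n)

def objective(x):
    return 0.5 * x @ H @ x - b @ x

# Spectral decomposition of H; keep the eigenvectors of the p largest
# eigenvalues and optimize over their weights w, with x = Vp w.
eigvals, eigvecs = np.linalg.eigh(H)   # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]

vals = {}
for p in (5, 50, n):
    Vp = eigvecs[:, order[:p]]
    # Reduced quadratic in w, solved exactly here for simplicity.
    w = np.linalg.solve(Vp.T @ H @ Vp, Vp.T @ b)
    vals[p] = objective(Vp @ w)
    print(p, vals[p])
```

With p = n the reduced solve recovers the unrestricted optimum, and the objective improves monotonically as p grows, mirroring the trade-off reported in the paper: few eigenvector weights give fast initial progress but an inferior final solution.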
14

Cancer treatment optimization

Cha, Kyungduck 01 April 2008 (has links)
This dissertation investigates optimization approaches applied to radiation therapy in cancer treatment. Since cancerous cells are surrounded by critical organs and normal tissues, the treatment design faces conflicting objectives: providing a sufficient radiation dose to the tumor region while avoiding normal healthy cells. In general, the goal of radiation therapy is to conform the spatial distribution of the prescribed dose to the tumor volume while minimizing the dose to the surrounding normal structures. A recent technology, the multi-leaf collimator integrated into the linear accelerator, provides much better opportunities to achieve this goal; radiotherapy based on non-uniform radiation beam intensities is called Intensity-Modulated Radiation Therapy. My dissertation research offers a quadratic mixed integer programming approach to determine optimal beam orientations and beamlet intensities simultaneously. The problems generated from real patient cases are large-scale dense instances due to the physics of dose contributions from beamlets to volume elements. The research highlights computational techniques to improve solution times for these intractable instances. Furthermore, results from this research provide plans that are clinically acceptable and superior in plan quality, thus directly improving the cure rate and lowering normal tissue complications for cancer patients.
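A quadratic mixed integer program coupling beam selection and beamlet intensities can be sketched in the following generic shape. This is an illustration only, with assumed symbols, not the dissertation's exact formulation: $D_{v,bj}$ is the dose to voxel $v$ per unit intensity of beamlet $j$ in candidate beam $b$, $p_v$ the prescribed dose, $w_v$ a voxel weight, $y_b$ a binary beam-selection variable, and $K$ a bound on the number of beams used.

```latex
\begin{aligned}
\min_{x,\,y} \quad
  & \sum_{v} w_v \Bigl( \sum_{b} \sum_{j \in N_b} D_{v,bj}\, x_{bj} - p_v \Bigr)^{2} \\
\text{s.t.} \quad
  & \sum_{b} y_b \le K, \\
  & 0 \le x_{bj} \le M\, y_b, \qquad j \in N_b,\ \forall b, \\
  & y_b \in \{0,1\}, \qquad \forall b.
\end{aligned}
```

The big-$M$ coupling forces beamlet intensities to zero for unselected beams, and the dense dose matrix $D$ is what makes real-patient instances large-scale and computationally demanding.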
15

Exact minimisation of treatment time for the delivery of intensity modulated radiation therapy

Wake, Giulia M. G. H. January 2009 (has links)
This thesis investigates the exact minimisation of treatment delivery time for Intensity Modulated Radiation Therapy (IMRT) for the treatment of cancer using Multileaf Collimators (MLC). Although patients are required to remain stationary during the delivery of IMRT, inevitably some patient movement will occur, particularly if treatment times are longer than necessary. Therefore minimising the treatment delivery time of IMRT may result in less patient movement, less inaccuracy in the dosage received and a potentially improved outcome for the patient. When IMRT is delivered using multileaf collimators in 'step and shoot' mode, it consists of a sequence of multileaf collimator configurations, or shape matrices; for each, time is needed to set up the configuration, and in addition the patient is exposed to radiation for a specified time, or beam-on time. The 'step and shoot leaf sequencing' problems for minimising treatment time considered in this thesis are the constant set-up time Total Treatment Time (TTT) problem and the Beam-on Time Constrained Minimum Cardinality (BTCMC) problem. The TTT problem minimises a weighted sum of total beam-on time and total number of shape matrices used, whereas the BTCMC problem lexicographically minimises the total beam-on time then the number of shape matrices used in a solution. The vast majority of approaches to these strongly NP-hard problems are heuristics; of the few exact approaches, the formulations either have excessive computation times or their solution methods do not easily incorporate multileaf collimator mechanical constraints (which are present in most currently used MLC systems). In this thesis, new exact mixed integer and integer programming formulations for solving the TTT and BTCMC problems are developed. The models and solution methods considered can be applied to the unconstrained and constrained versions of the problems, where 'constrained' refers to the modelling of additional MLC mechanical constraints. 
Within the context of integer programming formulations, new and existing methods for improving the computational efficiency of the models presented are investigated. Numerical results for all variations considered are provided. This thesis demonstrates that significant computational improvement can be achieved for the exact mixed integer and integer programming models investigated, via solution approaches based on an idea of systematically 'stepping-up' through the number of shape matrices used in a formulation, via additional constraints (particularly symmetry breaking constraints) and via the application of improved bounds on variables. This thesis also makes a contribution to the wider field of integer programming through the examination of an interesting substructure of an exact integer programming model. In summary, this thesis presents a thorough analysis of possible integer programming models for the strongly NP-hard 'step and shoot' leaf sequencing problems and investigates and applies methods for improving the computational efficiency of such formulations. In this way, this thesis contributes to the field of leaf sequencing for the application of Intensity Modulated Radiation Therapy using Multileaf Collimators.
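The shape-matrix idea behind these leaf sequencing problems can be illustrated for a single leaf pair. The sketch below is not the thesis's integer programming models and ignores MLC mechanical constraints: it is the classic unit-level sweep that decomposes one intensity row into weighted leaf openings, whose total weight matches the single-row lower bound on beam-on time (the sum of positive gradients along the row).

```python
from collections import Counter

def decompose_row(a):
    """Decompose a nonnegative integer intensity row into weighted leaf-pair
    openings (weight, left, right), right-exclusive: at every intensity level
    t, a unit-weight opening is recorded for each maximal run with a[i] >= t,
    and identical openings are merged into one weighted segment."""
    openings = Counter()
    n = len(a)
    for t in range(1, max(a, default=0) + 1):
        i = 0
        while i < n:
            if a[i] >= t:
                j = i
                while j < n and a[j] >= t:
                    j += 1
                openings[(i, j)] += 1
                i = j
            else:
                i += 1
    return [(w, l, r) for (l, r), w in openings.items()]

def reconstruct(segments, n):
    row = [0] * n
    for w, l, r in segments:
        for i in range(l, r):
            row[i] += w
    return row

row = [0, 2, 3, 3, 1, 0, 2, 2, 0]
segs = decompose_row(row)
assert reconstruct(segs, len(row)) == row
# Total beam-on time of this decomposition equals the single-row lower
# bound, the sum of positive gradients along the row (here 2 + 1 + 2 = 5).
print(sorted(segs), sum(w for w, _, _ in segs))
```

Minimizing the number of segments on top of the beam-on time (as in the BTCMC problem), and doing so across all rows with interleaf constraints, is exactly what makes the exact formulations in the thesis hard.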
16

Multicriteria optimization for managing tradeoffs in radiation therapy treatment planning

Bokrantz, Rasmus January 2013 (has links)
Treatment planning for radiation therapy inherently involves tradeoffs, such as between tumor control and normal tissue sparing, between time-efficiency and dose quality, and between nominal plan quality and robustness. The purpose of this thesis is to develop methods that can facilitate decision making related to such tradeoffs. The main focus of the thesis is on multicriteria optimization methods where a representative set of treatment plans is first calculated and the most appropriate plan contained in this representation is then selected by the treatment planner through continuous interpolation between the precalculated alternatives. These alternatives constitute a subset of the set of Pareto optimal plans, meaning plans such that no criterion can be improved without a sacrifice in another. Approximation of Pareto optimal sets is first studied with respect to fluence map optimization for intensity-modulated radiation therapy. The approximation error of a discrete representation is minimized by calculation of points one at a time at the location where the distance between an inner and outer approximation of the Pareto set currently attains its maximum. A technique for calculating this distance that is orders of magnitude more efficient than the best previous method is presented. A generalization to distributed computational environments is also proposed. Approximation of Pareto optimal sets is also considered with respect to direct machine parameter optimization. Optimization of this form is used to calculate representations where any interpolated treatment plan is directly deliverable. The fact that finite representations of Pareto optimal sets have approximation errors with respect to Pareto optimality is addressed by a technique that removes these errors by a projection onto the exact Pareto set. Projections are also studied subject to constraints that prevent the dose-volume histogram from deteriorating.
Multicriteria optimization is extended to treatment planning for volumetric-modulated arc therapy and intensity-modulated proton therapy. Proton therapy plans that are robust against geometric errors are calculated by optimization of the worst case outcome. The theory for multicriteria optimization is extended to accommodate this formulation. Worst case optimization is shown to be preferable to a previous more conservative method that also protects against uncertainties which cannot be realized in practice. / En viktig aspekt av planering av strålterapibehandlingar är avvägningar mellan behandlingsmål vilka står i konflikt med varandra. Exempel på sådana avvägningar är mellan tumörkontroll och dos till omkringliggande frisk vävnad, mellan behandlingstid och doskvalitet, och mellan nominell plankvalitet och robusthet med avseende på geometriska fel. Denna avhandling syftar till att utveckla metoder som kan underlätta beslutsfattande kring motstridiga behandlingsmål. Primärt studeras en metod för flermålsoptimering där behandlingsplanen väljs genom kontinuerlig interpolation över ett representativt urval av förberäknade alternativ. De förberäknade behandlingsplanerna utgör en delmängd av de Paretooptimala planerna, det vill säga de planer sådana att en förbättring enligt ett kriterium inte kan ske annat än genom en försämring enligt ett annat. Beräkning av en approximativ representation av mängden av Paretooptimala planer studeras först med avseende på fluensoptimering för intensitetsmodulerad strålterapi. Felet för den approximativa representationen minimeras genom att innesluta mängden av Paretooptimala planer mellan inre och yttre approximationer. Dessa approximationer förfinas iterativt genom att varje ny plan genereras där avståndet mellan approximationerna för tillfället är som störst. En teknik för att beräkna det maximala avståndet mellan approximationerna föreslås vilken är flera storleksordningar snabbare än den bästa tidigare kända metoden. 
En generalisering till distribuerade beräkningsmiljöer föreslås även. Approximation av mängden av Paretooptimala planer studeras även för direkt maskinparameteroptimering, som används för att beräkna representationer där varje interpolerad behandlingsplan är direkt levererbar. Det faktum att en ändlig representation av mängden av Paretooptimala lösningar har ett approximationsfel till Paretooptimalitet hanteras via en metod där en interpolerad behandlingsplan projiceras på Paretomängden. Projektioner studeras även under bivillkor som förhindrar att den interpolerade planens dos-volym histogram kan försämras. Flermålsoptimering utökas till planering av rotationsterapi och intensitetsmodulerad protonterapi. Protonplaner som är robusta mot geometriska fel beräknas genom optimering med avseende på det värsta möjliga utfallet av de föreliggande osäkerheterna. Flermålsoptimering utökas även teoretiskt till att innefatta denna formulering. Nyttan av värsta fallet-optimering jämfört med tidigare mer konservativa metoder som även skyddar mot osäkerheter som inte kan realiseras i praktiken demonstreras experimentellt. / QC 20130527
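The inner/outer "sandwich" idea for approximating a convex Pareto front can be sketched in two dimensions. This is a simplification under assumed data, not the thesis's algorithm: the toy front is f2 = 1/f1, each facet of the inner approximation is refined at the front point where a supporting hyperplane is parallel to the facet, and the facet with the largest point-to-facet distance is split first.

```python
import numpy as np

def scalarize(w):
    """Pareto-optimal point minimizing w*f1 + (1 - w)*f2 for a toy convex
    bi-objective problem whose exact Pareto front is f2 = 1/f1 (f1 > 0)."""
    t = np.sqrt((1 - w) / w)
    return np.array([t, 1.0 / t])

def sandwich(n_points, w_lo=0.05, w_hi=0.95):
    """Greedy sandwich-style refinement of a discrete front representation:
    repeatedly insert the front point with the largest distance to its facet
    of the inner approximation (a simplification of the thesis's scheme)."""
    pts = [scalarize(w_lo), scalarize(w_hi)]
    pts.sort(key=lambda q: q[0])
    for _ in range(n_points - 2):
        best = None
        for i in range(len(pts) - 1):
            a, b = pts[i], pts[i + 1]
            dy, dx = a[1] - b[1], b[0] - a[0]
            w = dy / (dy + dx)        # facet normal -> scalarization weight
            p = scalarize(w)
            # Perpendicular distance from the new front point to the facet.
            err = abs(dx * (p[1] - a[1]) + dy * (p[0] - a[0])) / np.hypot(dx, dy)
            if best is None or err > best[0]:
                best = (err, i + 1, p)
        pts.insert(best[1], best[2])
        pts.sort(key=lambda q: q[0])
    return np.array(pts), best[0]

pts, last_err = sandwich(10)
print(len(pts), last_err)   # refinement errors shrink as points are added
```

Every generated point is exactly Pareto optimal for the toy problem, and the greedy refinement error is non-increasing, which is the behavior a sandwich algorithm exploits to bound the approximation error of the representation.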
17

An evaluation of patient-specific IMRT verification failures

Crawford, Jason 10 September 2010 (has links)
At the BC Cancer Agency (BCCA), Vancouver Island Centre (VIC), the clinical verification of Intensity Modulated Radiation Therapy (IMRT) treatment plans involves comparing Portal Image (PI)-based three-dimensionally reconstructed (EPIDose) dose distributions to planned doses calculated using the Pencil Beam Convolution (PBC) algorithm. Discrepancies surpassing established action levels constitute failure. Since 2007, the failure rate of the IMRT verification process had been increasing, reaching as high as 18.5% in 2009. A retrospective evaluation of clinical IMRT verification failures was conducted to identify causes and possible resolutions. Thirty clinical verification failures were identified. An equipment malfunction was discovered and subsequently repaired, and several failures were resolved in the process. Statistical uncertainty in measurement outcome was small in comparison to action levels and not considered significant to the production of failures. Still, over 50% of the redelivered plans were shown to consistently fail. A subgroup of consistent verification plans was compared to ion chamber point dose measurements. Relative to ion chamber measurements, EPIDose underestimated the dose while the dose calculation algorithm (PBC, Eclipse version 8.1.18) overestimated the same point dose. Comparisons of individual fields demonstrated that none were identifiably problematic; dose discrepancies were the result of minor but accumulating dose differences. Consistent verification failures were recalculated using two advanced dose calculation engines (the Anisotropic Analytical Algorithm and Monte Carlo). In general, verification metrics improved, and all failures were resolved. Three distinct indices of fluence modulation (or complexity) were shown to correlate with verification metrics. This indicated that deficiencies in both the leaf motion calculator and the PBC (Eclipse version 8.1.18) had likely contributed to the production of failures.
In conclusion, clinical verification failures were resolved retrospectively by replacing faulty equipment and using more advanced methods of planned dose calculation, supporting the efficacy and continued use of PI-based three dimensional dose reconstruction for IMRT verification.
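The thesis does not specify its three modulation indices in this abstract, but the general idea of a fluence-complexity metric can be sketched with a generic adjacent-bixel proxy. Everything below is an assumption for illustration only, not one of the indices used in the study.

```python
import numpy as np

def modulation_index(fluence):
    """A simple fluence-complexity proxy (assumed, not one of the thesis's
    indices): mean absolute difference between adjacent bixels, in both
    directions, normalized by the mean nonzero fluence."""
    f = np.asarray(fluence, dtype=float)
    dx = np.abs(np.diff(f, axis=0)).mean()
    dy = np.abs(np.diff(f, axis=1)).mean()
    mean_f = f[f > 0].mean() if (f > 0).any() else 1.0
    return (dx + dy) / mean_f

flat = np.full((10, 10), 50.0)                 # unmodulated open field
jagged = np.where(np.indices((10, 10)).sum(axis=0) % 2 == 0, 100.0, 10.0)
print(modulation_index(flat), modulation_index(jagged))  # jagged >> flat
```

A metric of this kind is cheap to compute per plan, which is what makes correlating complexity with verification outcomes practical in a clinical QA workflow.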
18

Développement d'un nouveau critère pour déterminer les limites d'utilisation des détecteurs en dosimétrie non standard

Kamio, Yuji 12 1900 (has links)
Depuis quelques années, il y a un intérêt de la communauté en dosimétrie d'actualiser les protocoles de dosimétrie des faisceaux larges tels que le TG-51 (AAPM) et le TRS-398 (IAEA) aux champs non standard qui requièrent un facteur de correction additionnel. Or, ces facteurs de correction sont difficiles à déterminer précisément dans un temps acceptable. Pour les petits champs, ces facteurs augmentent rapidement avec la taille de champ tandis que pour les champs d'IMRT, les incertitudes de positionnement du détecteur rendent une correction cas par cas impraticable. Dans cette étude, un critère théorique basé sur la fonction de réponse dosimétrique des détecteurs est développé pour déterminer dans quelles situations les dosimètres peuvent être utilisés sans correction. Les réponses de quatre chambres à ionisation, d'une chambre liquide, d'un détecteur au diamant, d'une diode, d'un détecteur à l'alanine et d'un détecteur à scintillation sont caractérisées à 6 MV et 25 MV. Plusieurs stratégies sont également suggérées pour diminuer/éliminer les facteurs de correction telles que de rapporter la dose absorbée à un volume et de modifier les matériaux non sensibles du détecteur pour pallier l'effet de densité massique. Une nouvelle méthode de compensation de la densité basée sur une fonction de perturbation est présentée. Finalement, les résultats démontrent que le détecteur à scintillation peut mesurer les champs non standard utilisés en clinique avec une correction inférieure à 1%. / In recent years, the radiation dosimetry community has shown a keen interest in extending broad beam dosimetry protocols such as AAPM's TG-51 and IAEA's TRS-398 to nonstandard fields which involve the use of an additional correction factor. Yet, these correction factors are difficult to determine precisely in a time frame that is acceptable. 
For small fields, these factors increase rapidly with field size, whereas for composite IMRT fields, detector positioning uncertainties render a case-by-case correction impractical. In this study, a theoretical criterion based on radiation detectors' dose response functions is used to determine in which situations a given dosimeter can be used without correction. The responses of four ionization chambers, a liquid-filled chamber, a diamond detector, an unshielded diode, an alanine dosimeter and a plastic scintillator detector are characterized at 6 MV and 25 MV. Several strategies are also suggested to reduce/eliminate correction factors such as reporting the absorbed dose to a volume and modifying the non-sensitive components of a detector to compensate for mass density effects. A new method of density compensation based on a perturbation function is presented. Finally, results show that the scintillator detector can measure nonstandard fields used in the clinic with corrections under 1%.
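The criterion rests on the detector's dose response function: to first order, the detector reading is that function convolved with the dose distribution, and a correction is needed exactly when the reading deviates from the dose at the measurement point. A one-dimensional numerical sketch follows; the Gaussian response function and the field shapes are assumptions for illustration, not the measured response functions characterized in the study.

```python
import numpy as np

x = np.linspace(-30.0, 30.0, 2001)   # position, mm
dx = x[1] - x[0]

def response_kernel(sigma_mm):
    """Assumed Gaussian dose response function, normalized to unit area."""
    k = np.exp(-0.5 * (x / sigma_mm) ** 2)
    return k / (k.sum() * dx)

def reading(dose, kernel):
    # Detector signal = convolution of the response function with the dose.
    return np.convolve(dose, kernel, mode="same") * dx

def correction_factor(dose, kernel):
    """Dose at the central axis divided by the detector reading there."""
    i0 = len(x) // 2
    return dose[i0] / reading(dose, kernel)[i0]

det = response_kernel(2.0)                      # ~2 mm wide detector
broad = np.where(np.abs(x) < 25, 1.0, 0.0)      # broad reference field
small = np.exp(-0.5 * (x / 3.0) ** 2)           # narrow, small-field-like dose
print(correction_factor(broad, det), correction_factor(small, det))
```

For the broad field the reading matches the central-axis dose and the correction is ~1, while for the narrow field volume averaging depresses the reading and a correction well above 1% appears; a criterion of this kind flags the detector/field combinations that can be used correction-free.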
20

Direct optimization of dose-volume histogram metrics in intensity modulated radiation therapy treatment planning / Direkt optimering av dos-volym histogram-mått i intensitetsmodulerad strålterapiplanering

Zhang, Tianfang January 2018 (has links)
In optimization of intensity-modulated radiation therapy treatment plans, dose-volume histogram (DVH) functions are often used as objective functions to minimize the violation of dose-volume criteria. Neither DVH functions nor dose-volume criteria, however, are ideal for gradient-based optimization, as the former are not continuously differentiable and the latter are discontinuous functions of dose, apart from both being nonconvex. In particular, DVH functions often work poorly when used in constraints due to their being identically zero when feasible and having vanishing gradients on the boundary of feasibility.

In this work, we present a general mathematical framework allowing for direct optimization on all DVH-based metrics. By regarding voxel doses as sample realizations of an auxiliary random variable and using kernel density estimation to obtain explicit formulas, one arrives at formulations of volume-at-dose and dose-at-volume which are infinitely differentiable functions of dose. This is extended to DVH functions and so-called volume-based DVH functions, as well as to min/max-dose functions and mean-tail-dose functions. Explicit expressions for evaluation of function values and corresponding gradients are presented. The proposed framework has the advantages of depending on only one smoothness parameter, of approximation errors to conventional counterparts being negligible for practical purposes, and of a general consistency between derived functions.

Numerical tests, which were performed for illustrative purposes, show that smooth dose-at-volume works better than quadratic penalties when used in constraints and that smooth DVH functions in certain cases have a significant advantage over their conventional counterparts. The results of this work have been successfully applied to lexicographic optimization in a fluence map optimization setting.
/ Vid optimering av behandlingsplaner i intensitetsmodulerad strålterapi används dosvolym- histogram-funktioner (DVH-funktioner) ofta som målfunktioner för att minimera avståndet till dos-volymkriterier. Varken DVH-funktioner eller dos-volymkriterier är emellertid idealiska för gradientbaserad optimering då de förstnämnda inte är kontinuerligt deriverbara och de sistnämnda är diskontinuerliga funktioner av dos, samtidigt som båda också är ickekonvexa. Speciellt fungerar DVH-funktioner ofta dåligt i bivillkor då de är identiskt noll i tillåtna områden och har försvinnande gradienter på randen till tillåtenhet. I detta arbete presenteras ett generellt matematiskt ramverk som möjliggör direkt optimering på samtliga DVH-baserade mått. Genom att betrakta voxeldoser som stickprovsutfall från en stokastisk hjälpvariabel och använda ickeparametrisk densitetsskattning för att få explicita formler, kan måtten volume-at-dose och dose-at-volume formuleras som oändligt deriverbara funktioner av dos. Detta utökas till DVH-funktioner och så kallade volymbaserade DVH-funktioner, såväl som till mindos- och maxdosfunktioner och medelsvansdos-funktioner. Explicita uttryck för evaluering av funktionsvärden och tillhörande gradienter presenteras. Det föreslagna ramverket har fördelarna av att bero på endast en mjukhetsparameter, av att approximationsfelen till konventionella motsvarigheter är försumbara i praktiska sammanhang, och av en allmän konsistens mellan härledda funktioner. Numeriska tester genomförda i illustrativt syfte visar att slät dose-at-volume fungerar bättre än kvadratiska straff i bivillkor och att släta DVH-funktioner i vissa fall har betydlig fördel över konventionella sådana. Resultaten av detta arbete har med framgång applicerats på lexikografisk optimering inom fluensoptimering.
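The smoothing idea can be sketched numerically: replace the Heaviside step in the empirical volume-at-dose with a smooth CDF kernel, then invert to get a smooth dose-at-volume. This is a simplified illustration under assumptions — the thesis uses kernel density estimation and derives explicit gradients, whereas the sketch below uses a logistic kernel (chosen to stay stdlib-plus-numpy) and plain bisection for the inversion.

```python
import numpy as np

def smooth_volume_at_dose(doses, d, h=0.5):
    """Smooth volume-at-dose V(d): the fraction of voxels with dose >= d,
    with the Heaviside step replaced by a logistic CDF of bandwidth h --
    a single smoothness parameter, as in the framework described."""
    return float(np.mean(1.0 / (1.0 + np.exp(-(np.asarray(doses) - d) / h))))

def smooth_dose_at_volume(doses, v, h=0.5, lo=0.0, hi=150.0):
    """Invert V(d) = v by bisection; V is strictly decreasing in d, so the
    smooth dose-at-volume is well defined (gradients omitted in this sketch)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if smooth_volume_at_dose(doses, mid, h) > v:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(1)
doses = rng.normal(60.0, 2.0, size=10_000)   # synthetic voxel doses (Gy)
print(smooth_dose_at_volume(doses, 0.50))    # close to the median dose, ~60 Gy
```

Because both functions are smooth in the voxel doses, they admit the gradient-based treatment that raw DVH statistics lack, which is what makes them usable inside constraints.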
