341 |
The segmentation problem in radiation therapy. Engelbeen, Céline (30 June 2010)
The segmentation problem arises in the elaboration of a radiation therapy plan. After the cancer has been diagnosed and the radiation therapy sessions have been prescribed, the physician has to locate the tumor as well as the organs situated in the radiation field, called the organs at risk. The physician also has to determine the dosage to deliver to each of them: a lower bound on the dosage for the tumor (the minimum amount of radiation needed to achieve sufficient control of the tumor) and an upper bound for each organ at risk (the maximum amount of radiation the organ can receive without being damaged). Designing a radiation therapy plan that respects these dosage bounds is a complex optimization problem that is usually tackled in three steps, of which the segmentation problem is one.
Mathematically, the segmentation problem amounts to decomposing a given nonnegative integer matrix A into a nonnegative integer linear combination of binary matrices that satisfy the consecutive ones property. In clinical applications, several constraints may arise that further reduce the set of usable binary matrices with the consecutive ones property. We study some of them, such as the interleaf distance constraint, the interleaf motion constraint, the tongue-and-groove constraint and the minimum separation constraint.
We consider different versions of the segmentation problem with different objective functions. First we deal with the beam-on time problem, which minimizes the total time during which the patient is irradiated; we study it under the interleaf distance and interleaf motion constraints, and, in the binary case, under the tongue-and-groove constraint. We also address the cardinality problem and the lex-min problem. Finally, we present some results for the approximation problem.
The segmentation problem arises in the elaboration of a radiation therapy plan. After the physician has located the tumor and the organs situated close to it, he also has to determine the different dosages to be delivered. He then sets a lower bound on the dosage the tumor must receive in order to achieve satisfactory control of it, and upper bounds on the dosages for the different organs located in the radiation field. To respect these bounds as well as possible, the radiation therapy plan must be prepared carefully. We are interested in one of the steps carried out in determining this plan: the segmentation step.
Mathematically, this step consists in decomposing a given nonnegative integer matrix into a nonnegative integer linear combination of certain binary matrices. These binary matrices must satisfy the consecutive ones constraint (the ones in each row must be grouped into a single block). In clinical applications, additional constraints may restrict the set of binary matrices with consecutive ones (1C matrices) that can be used. We study some of them, such as the interleaf distance constraint, the interleaf motion constraint, the tongue-and-groove constraint and the minimum separation constraint.
The first problem we consider is to find a decomposition of the given matrix that minimizes the sum of the coefficients of the binary matrices. We developed polynomial-time algorithms that solve this problem under the interleaf distance constraint and/or the interleaf motion constraint. Moreover, we showed that if the given matrix is binary, such a decomposition can be found in polynomial time under the tongue-and-groove constraint.
In order to shorten the radiation therapy session, it may be desirable to minimize the number of 1C matrices used in the decomposition (whether or not the sum of the coefficients has first been minimized). We study this problem in several special cases (the given matrix consists of a single column, of a single row, or its largest entry is bounded by a constant). We present new lower bounds on the number of 1C matrices as well as new heuristics.
Finally, we study the case in which the set of 1C matrices does not allow an exact decomposition of the given matrix. The goal is then to find a decomposable matrix that is as close as possible to the given one. After examining some polynomially solvable cases, we prove that the general case is hard to approximate within an additive error of O(mn), where m and n are the dimensions of the given matrix.
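To make the decomposition concrete, the following Python sketch peels a single row of the intensity matrix into consecutive-ones segments with positive integer coefficients. It ignores the machine constraints discussed above (interleaf distance, interleaf motion, tongue-and-groove, minimum separation); under that simplification the sum of the coefficients it returns equals the minimum beam-on time for the row. The function name and the example row are illustrative and not taken from the thesis.

```python
def decompose_row(row):
    """Peel a nonnegative integer row into (coefficient, segment) pairs,
    where each segment is a binary row whose ones form one consecutive block."""
    a = list(row)
    n = len(a)
    segments = []
    while any(a):
        # find the first maximal interval of strictly positive entries
        l = next(i for i, v in enumerate(a) if v > 0)
        r = l
        while r + 1 < n and a[r + 1] > 0:
            r += 1
        coef = min(a[l:r + 1])          # largest slab that keeps all entries nonnegative
        for j in range(l, r + 1):
            a[j] -= coef
        segments.append((coef, [1 if l <= j <= r else 0 for j in range(n)]))
    return segments


row = [2, 3, 1, 0, 4, 4]
segs = decompose_row(row)
beam_on_time = sum(c for c, _ in segs)
# check: the weighted sum of the segments reproduces the row exactly
assert [sum(c * s[j] for c, s in segs) for j in range(len(row))] == row
print(beam_on_time, segs)
```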
|
342 |
Models and algorithms for network design problems. Poss, Michael (22 February 2011)
In this thesis, we study different models, deterministic and stochastic, for network design problems. We also examine the stochastic knapsack problem as well as, more generally, probabilistic capacity constraints.
In the first part, we focus on deterministic network design models with numerous technical constraints that approach realistic situations. We begin by studying two telecommunications network models. The first considers multi-layer networks with capacities on the arcs, while the second studies single-layer, uncapacitated networks in which the commodities must be routed on K disjoint paths of length at most L. We solve both problems with a branch-and-cut algorithm based on the Benders decomposition of linear formulations of these problems. The novelty of our approach lies mainly in the empirical study of the optimal frequency of cut generation during the algorithm.
We then study a power transmission network expansion problem. Our work examines different models and formulations for the problem, comparing them on real Brazilian networks. In particular, we show that re-dimensioning the network allows significant cost reductions.
In the second part, we examine stochastic programming models. First, we prove that three special cases of the knapsack problem with simple recourse can be solved by dynamic programming algorithms. We then reformulate the problem as a nonlinear integer program and test a branch-and-cut algorithm based on an outer approximation of the objective function.
This algorithm is then turned into a branch-and-cut-and-price algorithm, which is used to solve a stochastic network design problem with simple recourse.
Finally, we show how to linearize probabilistic capacity constraints with binary variables when the coefficients are random variables satisfying certain properties.
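As a rough illustration of a probabilistic capacity constraint in its simplest form (not the linearization developed in the thesis), the sketch below assumes the capacity b is normally distributed and replaces P(sum_i a_i x_i <= b) >= p by its deterministic equivalent sum_i a_i x_i <= mu_b + sigma_b * Phi^{-1}(1 - p). All numbers are made up.

```python
from scipy.stats import norm

# hypothetical data: item sizes, mean and std of the random capacity, target probability
a = [3, 5, 2, 7]           # deterministic coefficients a_i
mu_b, sigma_b = 12.0, 2.0  # capacity b ~ N(mu_b, sigma_b^2)
p = 0.95                   # required probability that the capacity is respected

# deterministic equivalent: sum_i a_i x_i <= mu_b + sigma_b * Phi^{-1}(1 - p)
effective_capacity = mu_b + sigma_b * norm.ppf(1 - p)
print(f"effective capacity at level {p}: {effective_capacity:.2f}")

# a binary vector x satisfies the chance constraint iff it fits this tighter capacity
x = [1, 0, 1, 1]
load = sum(ai * xi for ai, xi in zip(a, x))
print("feasible:", load <= effective_capacity)
```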
|
343 |
Methods for phylogenetic analysis. Krig, Kåre (January 2010)
In phylogenetic analysis one studies the relationships between different species. By comparing DNA from two different species it is possible to obtain a numerical value representing the difference between them. For a set of species, all pair-wise comparisons result in a dissimilarity matrix d. In this thesis I present a few methods for constructing a phylogenetic tree from d. The common denominator of these methods is that they do not generate a tree but a connected graph. The resulting graph will be a tree in the regions where the data perfectly match a tree; where d does not perfectly match a tree, the graph will instead show the different possible topologies and how strongly the data support each of them. Finally, I test the methods both on real measured data and on constructed test cases.
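As an example of how such a dissimilarity matrix d can be obtained, the sketch below computes the p-distance (fraction of differing aligned sites) between made-up DNA sequences; this is one common choice of distance and not necessarily the one used in the thesis.

```python
import numpy as np

def p_distance(s1, s2):
    """Fraction of aligned positions at which two equal-length sequences differ."""
    assert len(s1) == len(s2)
    return sum(c1 != c2 for c1, c2 in zip(s1, s2)) / len(s1)

# hypothetical aligned sequences for four species
seqs = {
    "A": "ACGTACGTAC",
    "B": "ACGTACGAAC",
    "C": "ACGAACGAAT",
    "D": "TCGAACGATT",
}
names = list(seqs)
d = np.array([[p_distance(seqs[u], seqs[v]) for v in names] for u in names])
print(names)
print(d)   # symmetric dissimilarity matrix with zero diagonal
```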
|
344 |
Application of Sputtering Technology on Preparing Visible-light Nano-sized Photocatalysts for the Decomposition of Acetone. Wu, Yi-chen (05 September 2007)
This study investigated the decomposition efficiency of acetone using unmodified (pure TiO2) and modified TiO2 (TiO2/ITO and TiO2/N) photocatalysts prepared by sputtering technology. The influence of operating parameters on the decomposition efficiency of acetone was also discussed. The parameters investigated included light wavelength (350~400, 435~500, and 506~600 nm), photocatalyst (TiO2/ITO, TiO2/N, and TiO2), and relative humidity (RH) (0%, 50%, and 100%).
In the experiments, acetone was degraded by the photocatalysts in a self-designed batch photocatalytic reactor. Samples coated with TiO2 were placed in the batch reactor. Incident light at the different wavelengths was provided by 20-watt lamps: a low-pressure mercury lamp for UV light, or LED lamps for blue and green light, placed on top of the reactor. Acetone was injected into the reactor with a gas-tight syringe. Reactants and products were analyzed quantitatively by gas chromatography with a flame ionization detector and a methanizer (GC/FID-methanizer).
The photocatalyst film surface showed a tapered, columnar structure, with column widths ranging from 100 to 200 nm. The films were crystalline with a cylindrical surface structure, and the photocatalyst film thickness was in the range of 4.0-4.3 μm. The highest decomposition efficiency of acetone was observed using TiO2/ITO under visible light at 50% RH. The TiO2 synthesized was mainly anatase for the tested photocatalysts. AFM images showed that the photocatalyst surface was rugged, with a mountain-ridge-like distribution.
Keywords: sputtering technology, modified photocatalysts, photosensitive, acetone, photocatalytic oxidation, acetone decomposition
|
345 |
Approches numérique multi-échelle/multi-modèle de la dégradation des matériaux composites. Touzeau, Josselyn (30 October 2012)
Our work concerns the implementation of a multiscale method to facilitate the numerical simulation of complex structures, applied to the modeling of aeronautical components (in particular rotating turbojet parts and laminated composite structures). These developments are based on the Arlequin method, which enriches numerical models, by means of patches, around zones of interest where complex phenomena occur. The method is implemented in a general framework allowing the superposition of incompatible meshes within the Z-set/Zébulon finite element code, using an optimal formulation of the coupling operators. The accuracy and robustness of this approach have been evaluated on various numerical problems. To increase the performance of the Arlequin method, a dedicated solver based on domain decomposition techniques was developed to benefit from the computing power offered by parallel architectures. Its performance was evaluated on several academic and quasi-industrial test cases. Finally, these developments were applied to the simulation of laminated composite structure problems.
|
346 |
Depth Map Compression Based on Platelet Coding and Quadratic Curve Fitting. Wang, Han (26 October 2012)
Due to the fast development of 3D technology in recent decades, many approaches to 3D representation have been proposed worldwide. In order to obtain accurate enough information to render a 3D representation, more data need to be recorded than for a normal video sequence, so finding an efficient way to transmit the 3D representation data becomes an important part of the whole 3D representation technology. In recent years, many coding schemes based on the principle of encoding the depth have been proposed. Compared to traditional multiview coding schemes, these newly proposed schemes can achieve higher compression efficiency. With the development of depth capturing technology, the accuracy and quality of the reconstructed depth image have also improved. In this thesis we propose an efficient depth data compression scheme for 3D images, based on platelet coding using Lagrangian optimization, quadtree decomposition and quadratic curve fitting. We study and improve the original platelet-based coding scheme and achieve a compression improvement of 1-2 dB over the original platelet-based scheme. The experimental results illustrate the improvement provided by our scheme: the quality of the reconstructed results of our curve-fitting-based platelet coding scheme is better than that of the original scheme.
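As a small illustration of the curve-fitting ingredient, the sketch below fits a quadratic curve to the depth values of a single block row by least squares and reports the approximation error. It is a generic least-squares fit on synthetic data, not the platelet/quadtree coder proposed in the thesis.

```python
import numpy as np

# synthetic depth samples along one row of a block (a smooth surface plus noise)
x = np.arange(16, dtype=float)
depth = 0.3 * x**2 - 2.0 * x + 40.0 + np.random.normal(0.0, 0.5, x.size)

# least-squares quadratic fit: depth(x) ~ c2*x^2 + c1*x + c0
c2, c1, c0 = np.polyfit(x, depth, deg=2)
approx = np.polyval([c2, c1, c0], x)

mse = np.mean((depth - approx) ** 2)
psnr = 10 * np.log10(depth.max() ** 2 / mse)    # rough quality of the fitted block
print(f"coefficients: {c2:.3f}, {c1:.3f}, {c0:.3f}, PSNR: {psnr:.1f} dB")
```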
|
347 |
Boundary integral methods for Stokes flow: Quadrature techniques and fast Ewald methods. Marin, Oana (January 2012)
Fluid phenomena dominated by viscous effects can, in many cases, be modeled by the Stokes equations. The boundary integral form of the Stokes equations reduces the number of degrees of freedom in a numerical discretization by reformulating the three-dimensional problem to two-dimensional integral equations to be discretized over the boundaries of the domain. Hence for the study of objects immersed in a fluid, such as drops or elastic/solid particles, integral equations are to be discretized over the surfaces of these objects only. As outer boundaries or confinements are added these must also be included in the formulation. An inherent difficulty in the numerical treatment of boundary integrals for Stokes flow is the integration of the singular fundamental solution of the Stokes equations, e.g. the so called Stokeslet. To alleviate this problem we developed a set of high-order quadrature rules for the numerical integration of the Stokeslet over a flat surface. Such a quadrature rule was first designed for singularities of the type $1/|\mathbf{x}|$. To assess the convergence properties of this quadrature rule a theoretical analysis has been performed. The slightly more complicated singularity of the Stokeslet required certain modifications of the integration rule developed for $1/|\mathbf{x}|$. An extension of this type of quadrature rule to a cylindrical surface is also developed. These quadrature rules are tested also on physical problems that have an analytic solution in the literature.
Another difficulty associated with boundary integral problems is introduced by periodic boundary conditions. For a set of particles in a periodic domain periodicity is imposed by requiring that the motion of each particle has an added contribution from all periodic images of all particles all the way up to infinity. This leads to an infinite sum which is not absolutely convergent, and an additional physical constraint which removes the divergence needs to be imposed. The sum is decomposed into two fast converging sums, one that handles the short range interactions in real space and the other that sums up the long range interactions in Fourier space. Such decompositions are already available in the literature for kernels that are commonly used in boundary integral formulations. Here a decomposition in faster decaying sums than the ones present in the literature is derived for the periodic kernel of the stress tensor. However the computational complexity of the sums, regardless of the decomposition they stem from, is $\mathcal{O}(N^{2})$. This complexity can be lowered using a fast summation method as we introduced here for simulating a sedimenting fiber suspension. The fast summation method was initially designed for point particles, which could be used for fibers discretized numerically almost without any changes. However, when two fibers are very close to each other, analytical integration is used to eliminate numerical inaccuracies due to the nearly singular behavior of the kernel and the real space part in the fast summation method was modified to allow for this analytical treatment.
The method we have developed for sedimenting fiber suspensions allows for simulations in large periodic domains and we have performed a set of such simulations at a larger scale (larger domain/more fibers) than previously feasible.
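For reference, the free-space Stokeslet mentioned above is the kernel $G_{ij}(\mathbf{r}) = \delta_{ij}/|\mathbf{r}| + r_i r_j/|\mathbf{r}|^3$, and the velocity induced at a point $\mathbf{x}$ by point forces $\mathbf{f}_k$ located at $\mathbf{x}_k$ is $\mathbf{u}(\mathbf{x}) = \frac{1}{8\pi\mu}\sum_k G(\mathbf{x}-\mathbf{x}_k)\,\mathbf{f}_k$. The sketch below evaluates this sum directly, i.e. without the special quadrature rules or the Ewald splitting developed in the thesis, with made-up points and forces.

```python
import numpy as np

def stokeslet(r):
    """Free-space Stokeslet G(r) = I/|r| + r r^T / |r|^3 (singular at r = 0)."""
    d = np.linalg.norm(r)
    return np.eye(3) / d + np.outer(r, r) / d**3

def velocity(x, points, forces, mu=1.0):
    """Velocity at x induced by point forces located at `points` (direct summation)."""
    u = np.zeros(3)
    for xk, fk in zip(points, forces):
        u += stokeslet(x - xk) @ fk
    return u / (8.0 * np.pi * mu)

# made-up configuration: three point forces, evaluation at one target point
points = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.5]])
forces = np.array([[0.0, 0.0, -1.0], [0.0, 0.0, -1.0], [0.0, 0.0, -1.0]])
print(velocity(np.array([0.5, 0.5, 2.0]), points, forces))
```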
|
348 |
Arthropod succession in Whitehorse, Yukon Territory and compared development of Protophormia terraenovae (R.-D.) from Beringia and the Great Lakes Region. Bygarski, Katherine (01 July 2012)
Forensic medicocriminal entomology is used in the estimation of post-mortem intervals in death investigations, by means of arthropod succession patterns and the development rates of individual insect species. The purpose of this research was to determine arthropod succession patterns in Whitehorse, Yukon Territory, and to compare the development rates of the dominant blowfly species (Protophormia terraenovae R.-D.) with those of another population collected in Oshawa, Ontario. Decomposition in Whitehorse occurred at a much slower rate than is expected for the summer season, and the single dominant blowfly species is not considered dominant or a primary colonizer in more southern regions. Development rates of P. terraenovae were determined under both naturally fluctuating conditions and two constant temperatures. Under naturally fluctuating conditions, there was no significant difference in growth rate between the studied biotypes. Results at repeated 10°C conditions varied, though neither biotype completed development, indicating that the published minimum development thresholds for this species are underestimated.
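Development rates of this kind are often summarized with a thermal accumulation (degree-day) model, in which development proceeds only above a minimum threshold temperature and accumulated degree days are computed as ADD = sum over days of max(0, T - T_base). The sketch below illustrates that calculation with made-up temperatures and an assumed threshold; it is not data or a method taken from this study.

```python
def accumulated_degree_days(daily_mean_temps, base_temp):
    """Sum of daily degrees above the minimum development threshold."""
    return sum(max(0.0, t - base_temp) for t in daily_mean_temps)

# hypothetical daily mean temperatures (°C) and an assumed base threshold
temps = [8.5, 10.2, 12.0, 9.0, 11.5, 13.2, 10.8]
base = 10.0   # illustrative threshold; the study suggests published values may be too low
print(accumulated_degree_days(temps, base))   # degree days accrued over the week
```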
|
349 |
Energy, exergy and cost analyses of nuclear-based hydrogen production via thermochemical water decomposition using a copper-chlorine (Cu-Cl) cycle. Orhan, Mehmet Fatih (01 April 2008)
In this thesis the copper-chlorine (Cu-Cl) thermochemical cycle and its components, as well as the operational and environmental conditions, are defined, and a comprehensive thermodynamic analysis of the Cu-Cl thermochemical cycle, including the relevant chemical reactions, is performed. The performance of each component/process is evaluated through energy and exergy efficiencies. Various parametric studies on energetic and exergetic aspects with variable reaction and reference-environment temperatures are carried out. A detailed analysis of the general methodology of cost estimation for the proposed process, including all cost items with their percentages, the factors that affect accuracy, and a scaling method, is also presented.
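To make the efficiency definitions concrete: the energy efficiency of a thermochemical hydrogen production cycle is commonly taken as the heating value of the hydrogen produced divided by the heat input, and the exergy efficiency as the chemical exergy of the hydrogen divided by the exergy of that heat, Q(1 - T0/Ts). The sketch below applies these textbook definitions with illustrative numbers; the temperatures, heat input and property values are assumptions, not results from the thesis.

```python
T0 = 298.15        # reference-environment temperature, K
Ts = 823.15        # assumed temperature of the nuclear heat source, K
Q_in = 300e6       # assumed heat input per kg of H2 produced, J
LHV_H2 = 120e6     # lower heating value of hydrogen, J/kg (approximate)
EX_H2 = 117e6      # standard chemical exergy of hydrogen, J/kg (approximate)

m_H2 = 1.0                                   # basis: 1 kg of hydrogen produced
eta_energy = m_H2 * LHV_H2 / Q_in            # energy efficiency
ex_heat = Q_in * (1.0 - T0 / Ts)             # exergy of the heat input (Carnot factor)
psi_exergy = m_H2 * EX_H2 / ex_heat          # exergy efficiency
print(f"energy efficiency: {eta_energy:.2%}, exergy efficiency: {psi_exergy:.2%}")
```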
|
350 |
An Economic Analysis of School and Labor Market Outcomes For At-Risk Youth. Kagaruki-Kakoti, Generosa (12 May 2005)
Federal education policy has targeted children who are disadvantaged in order to improve their academic performance; the most recent federal education policy is the No Child Left Behind law signed by President Bush in 2001. Indicators often used to identify an at-risk youth include economic, personal, family, and neighborhood characteristics. A probit model is used in this study to estimate the probability that a student graduates from high school as a function of 8th grade variables. Students are classified as at-risk of dropping out of high school or non-at-risk based on having one or more risk factors. The main measures of academic outcomes are high school completion and post-secondary academic achievement; the main measures of labor market outcomes are short-term and long-term earnings. The results show that a student who comes from a low-income family, has a sibling who dropped out, has parents with low education, is home alone after school for three hours or more, or comes from a step family in the eighth grade is at risk of dropping out of high school. At-risk students are less likely than non-at-risk students to graduate from high school, and they appear to be more sensitive to conditions that may impair or assist their academic progress while they are in high school. At-risk students are also less likely to select a bachelor's degree, although when compared with otherwise comparable non-at-risk students, a greater percentage of at-risk students select a bachelor's or post-graduate degree. At-risk individuals face a long-term disadvantage in the labor market, receiving lower wage offers than the non-at-risk group. Comparing only those without post-secondary education shows that the average earnings offered to at-risk individuals were lower than those offered to non-at-risk individuals; at-risk college graduates also receive lower earnings than non-at-risk college graduates. The wage differential is largely due to the disadvantage at-risk individuals face in the labor market.
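A probit specification of this kind models the probability of graduating as Φ(Xβ), where X collects the 8th grade indicators; the sketch below shows how such a model can be estimated with statsmodels on synthetic data. The variable names and coefficients are invented stand-ins, not the study's data or estimates.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# synthetic 8th-grade risk indicators (1 = risk factor present)
low_income = rng.integers(0, 2, n)
sibling_dropout = rng.integers(0, 2, n)
home_alone_3h = rng.integers(0, 2, n)

X = sm.add_constant(np.column_stack([low_income, sibling_dropout, home_alone_3h]))
# synthetic outcome: risk factors lower the latent propensity to graduate
latent = 1.0 - 0.6 * low_income - 0.5 * sibling_dropout - 0.3 * home_alone_3h
graduate = (latent + rng.normal(size=n) > 0).astype(int)

model = sm.Probit(graduate, X).fit(disp=False)
print(model.summary())
print("predicted graduation probabilities:", model.predict(X)[:5])
```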
|