131

Supply chain design and distribution planning under supply uncertainty : Application to bulk liquid gas distribution / Optimisation de chaine logistique et planning de distribution sous incertitude d’approvisionnement

Dubedout, Hugues 03 June 2013
The distribution of liquid gases (cryogenic liquids) in bulk, by tanker truck, is a particular case of supply chain and transport optimisation problems. Traditionally, these problems are treated under the assumption that the data are known in advance and certain. However, a large part of real-world optimisation problems are subject to significant uncertainty arising from noisy, approximated or unknown objective functions, data and/or environment parameters. This research investigates both robust and stochastic solution methods, applied to two distinct problems: an inventory routing problem (IRP) and a production planning and customer allocation problem. For the IRP, we present a robust methodology with an advanced scenario-generation scheme representing plant outages. We show that, with a minimal increase in cost, it is possible to find solutions that significantly reduce the impact of a plant outage on the distribution. We also show how the solution-generation procedure used in this method can be applied to the deterministic version of the problem to build an efficient GRASP, significantly improving the results of the existing algorithm. The production planning and customer allocation problem concerns tactical decisions over a longer time horizon. We propose a single-period, two-stage stochastic model in which the first-stage decisions are taken for the entire period before an outage occurs, and the second-stage (recourse) decisions optimise the recovery after the outage, the objective being to minimise the total supply chain cost. Since the aim is a tool usable both for decision making and for supply chain analysis, the results include not only the optimised solution but also key performance indicators. We show on multiple real-life test cases that it is often possible to find solutions for which a plant outage has only a minimal impact.
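As an illustration of the single-period, two-stage structure this abstract describes, the model can be written as a deterministic equivalent over outage scenarios: first-stage allocations are fixed before any outage, and scenario-indexed recourse variables model the recovery. A minimal sketch in Python with PuLP follows; every plant, demand, cost and scenario figure is invented for illustration, not data from the thesis.

```python
# Minimal deterministic equivalent of a two-stage allocation model (PuLP).
import pulp

# Illustrative data (assumptions, not figures from the thesis):
plants = ["P1", "P2"]
customers = ["C1", "C2", "C3"]
demand = {"C1": 40, "C2": 55, "C3": 30}
capacity = {"P1": 80, "P2": 70}
cost = {("P1", "C1"): 4, ("P1", "C2"): 6, ("P1", "C3"): 9,
        ("P2", "C1"): 7, ("P2", "C2"): 5, ("P2", "C3"): 3}
scenarios = {"none": 0.8, "P1": 0.1, "P2": 0.1}  # outage scenario -> probability
recourse_extra = 3    # extra unit cost of reallocating after an outage
shortfall_cost = 100  # penalty per unit of unmet demand (keeps the model feasible)

m = pulp.LpProblem("two_stage_supply", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (plants, customers), lowBound=0)           # first stage
y = {s: pulp.LpVariable.dicts(f"y_{s}", (plants, customers), lowBound=0)  # recourse
     for s in scenarios}
u = {s: pulp.LpVariable.dicts(f"u_{s}", customers, lowBound=0)            # shortfall
     for s in scenarios}

# Expected cost: first-stage cost + probability-weighted recourse and shortfall.
m += (pulp.lpSum(cost[p, c] * x[p][c] for p in plants for c in customers)
      + pulp.lpSum(prob * (cost[p, c] + recourse_extra) * y[s][p][c]
                   for s, prob in scenarios.items()
                   for p in plants for c in customers)
      + pulp.lpSum(prob * shortfall_cost * u[s][c]
                   for s, prob in scenarios.items() for c in customers))

for s in scenarios:
    alive = [p for p in plants if p != s]  # plants surviving scenario s
    for c in customers:
        # Surviving first-stage flow plus recourse (plus shortfall) covers demand.
        m += pulp.lpSum(x[p][c] + y[s][p][c] for p in alive) + u[s][c] >= demand[c]
    for p in alive:
        m += pulp.lpSum(x[p][c] + y[s][p][c] for c in customers) <= capacity[p]

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("expected cost:", pulp.value(m.objective))
```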
132

Optimisation and Bayesian optimality

Joyce, Thomas January 2016
This doctoral thesis presents the results of work on optimisation algorithms. We first give a detailed exploration of the problems involved in comparing optimisation algorithms. In particular we provide extensions and refinements to no free lunch results, exploring algorithms with arbitrary stopping conditions, optimisation under restricted metrics, parallel computing and free lunches, and head-to-head minimax behaviour. We also characterise no free lunch results in terms of order statistics. We then ask what really constitutes understanding of an optimisation algorithm. We argue that one central part of understanding an optimiser is knowing its Bayesian prior and cost function. We then pursue a general Bayesian framing of optimisation, and prove that this Bayesian perspective is applicable to all optimisers, and that even seemingly non-Bayesian optimisers can be understood in this way. Specifically, we prove that arbitrary optimisation algorithms can be represented as a prior and a cost function. We examine the relationship between the Kolmogorov complexity of the optimiser and the Kolmogorov complexity of its corresponding prior. We also extend our results from deterministic optimisers to stochastic optimisers and forgetful optimisers, and we show that selecting a prior uniformly at random is not equivalent to selecting an optimisation behaviour uniformly at random. Lastly, we consider how best to gain a Bayesian understanding of real optimisation algorithms. We use the developed Bayesian framework to explore the effects of some common approaches to constructing meta-heuristic optimisation algorithms, such as on-line parameter adaptation. We conclude by exploring an approach to uncovering the probabilistic beliefs of optimisers with a "shattering" method.
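A flavour of the no free lunch results discussed in this abstract can be reproduced on a toy search space: averaged over the closed set of all objective functions on a small finite domain, any two deterministic non-repeating search strategies achieve the same expected best-found value. A minimal sketch (my own illustration, not code from the thesis):

```python
import itertools

X = [0, 1, 2]  # search points
Y = [0, 1]     # possible objective values

def run(order, f, budget=2):
    """Deterministic non-repeating search: visit points in a fixed order,
    return the best (lowest) value seen within the budget."""
    return min(f[x] for x in order[:budget])

ascending = [0, 1, 2]
descending = [2, 1, 0]

# Average best-so-far over *all* functions f: X -> Y.
funcs = list(itertools.product(Y, repeat=len(X)))  # 2^3 = 8 functions
for order in (ascending, descending):
    avg = sum(run(order, f) for f in funcs) / len(funcs)
    print(order, avg)
# Both orders print 0.25: over the closed set of functions, no free lunch.
```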
133

Techniques of design optimisation for algorithms implemented in software

Hopson, Benjamin Thomas Ken January 2016
The overarching objective of this thesis was to develop tools for parallelising, optimising, and implementing algorithms on parallel architectures, in particular General Purpose Graphics Processors (GPGPUs). Two projects were chosen from different application areas in which GPGPUs are used: a defence application involving image compression, and a modelling application in bioinformatics (computational immunology). Each project had its own specific objectives, as well as supporting the overall research goal. The defence/image compression project was carried out in collaboration with the Jet Propulsion Laboratory. The specific questions were: to what extent an algorithm designed for bit-serial hardware implementation of the lossless compression of hyperspectral images on board unmanned aerial vehicles (UAVs) could be parallelised, whether GPGPUs could be used to implement that algorithm, and whether a software implementation with or without GPGPU acceleration could match the throughput of a dedicated hardware (FPGA) implementation. The dependencies within the algorithm were analysed, and the algorithm parallelised. The algorithm was implemented in software for GPGPU and optimised. During the optimisation process, profiling revealed less than optimal device utilisation, but no further optimisations resulted in an improvement in speed: the design had hit a local maximum of performance. Analysis of the arithmetic intensity and data flow exposed flaws in kernel occupancy, the standard metric used for GPU optimisation. Redesigning the implementation with revised criteria (fused kernels, lower occupancy, and greater data locality) led to a new implementation with 10x higher throughput. GPGPUs were shown to be viable for on-board implementation of the CCSDS lossless hyperspectral image compression algorithm, exceeding the performance of the hardware reference implementation and providing sufficient throughput for the next generation of image sensors as well. The second project was carried out in collaboration with biologists at the University of Arizona and involved modelling a complex biological system: VDJ recombination, involved in the formation of T-cell receptors (TCRs). Generation of immune receptors (T-cell receptors and antibodies) by VDJ recombination is an enormously complex process, which can theoretically synthesize greater than 10¹⁸ variants. Originally thought to be a random process, the underlying mechanisms clearly have a non-random nature that preferentially creates a small subset of immune receptors in many individuals. Understanding this bias is a longstanding problem in the field of immunology. Modelling the process of VDJ recombination to determine the number of ways each immune receptor can be synthesized, previously thought to be untenable, is a key first step in determining how this special population is made. The computational tools developed in this thesis have allowed immunologists for the first time to comprehensively test and invalidate a longstanding theory (convergent recombination) for how this special population is created, while generating the data needed to develop novel hypotheses.
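The occupancy pitfall this abstract describes is often reasoned about with a roofline model: attainable throughput is bounded by min(peak compute, memory bandwidth × arithmetic intensity), so a fused, lower-occupancy kernel with better data locality can beat a high-occupancy one. A back-of-the-envelope sketch, where the hardware and intensity numbers are invented for illustration (not figures from the thesis):

```python
def roofline_gflops(intensity, peak_gflops, bw_gbs):
    """Attainable throughput under the roofline model:
    min(peak compute, DRAM bandwidth * arithmetic intensity)."""
    return min(peak_gflops, bw_gbs * intensity)

PEAK, BW = 1000.0, 200.0  # assumed GPU: 1 TFLOP/s peak, 200 GB/s DRAM bandwidth

separate = roofline_gflops(1.0, PEAK, BW)  # two kernels re-reading data from DRAM
fused = roofline_gflops(4.0, PEAK, BW)     # fused kernel keeps intermediates on-chip

print(f"separate: {separate:.0f} GFLOP/s, fused: {fused:.0f} GFLOP/s")
# separate: 200 GFLOP/s (bandwidth-bound), fused: 800 GFLOP/s
```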
134

Optimisation des applications de traitement systématique intensives sur Systems-on-Chip / Optimizations for systematic and intensive signal processing applications on Systems-on-Chip

Glitia, Calin 23 November 2009
Intensive signal processing applications appear in many application domains such as video processing or detection systems. These applications handle multidimensional data structures (mainly arrays) to deal with the various dimensions of the data (space, time, frequency). A specification language allowing the direct manipulation of these different dimensions with a high level of abstraction is a key to handling the complexity of these applications and to benefiting from their massive potential parallelism. The Array-OL specification language is designed to do just that. In this thesis, we introduce an extension of Array-OL to express cycle dependences by way of uniform inter-repetition dependences. We show that this specification language is able to express the main patterns of computation of the intensive signal processing domain. We also discuss the repetitive modeling of parallel applications, repetitive architectures and uniform mappings of the former onto the latter, using the Array-OL concepts integrated into the Modeling and Analysis of Real-time and Embedded systems (MARTE) UML profile. High-level data-parallel transformations are available to adapt the application to the execution platform, allowing the designer to choose the granularity of the flows and a simple expression of the mapping by tagging each repetition with its execution mode: data-parallel or sequential. The whole set of transformations was reviewed, extended and implemented as part of the Gaspard2 co-design environment for embedded systems. With the introduction of uniform dependences into the specification, our interest also turns to the interaction between these dependences and the high-level transformations; this is essential in order to enable the use of the refactoring tools on models with uniform dependences. Based on the high-level refactoring tools, strategies and heuristics can be designed to help explore the design space. We propose a strategy that finds good trade-offs in the usage of storage and computation resources and in the exploitation of parallelism (both task and data parallelism), illustrated on an industrial radar application.
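To make the repetitive access model this abstract relies on concrete: in Array-OL-style addressing, each repetition r of a task reads a pattern of array elements located by an origin vector, a paving matrix (stepping between repetitions) and a fitting matrix (stepping inside one pattern), with indices taken modulo the array shape. A simplified sketch of that addressing, with small illustrative matrices of my own choosing:

```python
import numpy as np

def pattern_indices(origin, paving, fitting, repetition, pattern_shape, array_shape):
    """Array elements read by one repetition of an Array-OL-style task:
    index = origin + paving @ repetition + fitting @ i  (mod array_shape),
    for every point i of the pattern."""
    base = np.asarray(origin) + np.asarray(paving) @ np.asarray(repetition)
    idx = []
    for i in np.ndindex(*pattern_shape):
        e = (base + np.asarray(fitting) @ np.array(i)) % array_shape
        idx.append(tuple(int(v) for v in e))
    return idx

# Illustrative 2-D example (values are assumptions, not from the thesis):
array_shape = np.array([8, 8])
origin = [0, 0]
paving = [[2, 0], [0, 2]]   # repetitions step by 2 in each dimension
fitting = [[1, 0], [0, 1]]  # each pattern is a contiguous 2x2 tile
print(pattern_indices(origin, paving, fitting, repetition=[1, 0],
                      pattern_shape=(2, 2), array_shape=array_shape))
# -> [(2, 0), (2, 1), (3, 0), (3, 1)]: the 2x2 tile anchored at (2, 0)
```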
135

Quality-of-service-based approach for dimensioning and optimisation of mobile cellular networks

Kourtis, Stamatis January 2002
Next generation high-performance systems are being standardised assuming a generic service delivery paradigm capable of supporting a diversity of circuit and, importantly, packet services. However, this flexibility comes at a cost: increased complexity of dimensioning, planning, optimisation and QoS provisioning with respect to previous-generation single-service mobile systems. Accurate system dimensioning is of fundamental importance and this thesis explores this requirement at two levels. Firstly, it departs from the common assumption of static users and examines the impact of mobile users on system capacity. Secondly, it examines the impact of voice and web-browsing services on system dimensioning. In spite of accurate dimensioning and planning, load imbalances occur for different reasons, resulting in small-scale congestion events in the system. A load equalisation scheme is proposed which utilises the overlapping areas between neighbouring cells in order to eliminate the load imbalances. Essentially, coverage overlapping is needed in order to achieve ubiquitous coverage, hence to eliminate coverage holes. However, excessive overlapping results in capacity loss in interference-limited systems, which is virtually the case with all modern systems. Radio coverage optimisation is needed, but today this is performed on a cell-by-cell basis, producing sub-optimal results. This thesis proposes an advanced coverage optimisation algorithm which simultaneously takes into consideration all cells within the considered area. For the operators (and also for the proposed coverage optimisation algorithm) it is imperative to have accurate path loss predictions. However, contemporary planning tools come with certain limitations, and often time-consuming and expensive measurement campaigns are organised. This thesis builds on the assumption that mobile systems will be able to locate the position of mobile terminals, and subsequently proposes an automated process for the estimation of the radio coverage of the network. Lastly, the assumption regarding the positioning capabilities of the mobile systems is further exploited in order to enhance the QoS guarantees to mobile users. Thus, various algorithms are examined which perform handovers towards base stations that maximise the survivability of the handed-over calls.
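The automated coverage-estimation idea above pairs terminal positions with measured signal strengths; one common way to turn such data into coverage predictions is to fit a log-distance path loss model, PL(d) = PL(d0) + 10·n·log10(d/d0), by least squares. A minimal sketch under that assumption (the model choice and numbers are mine, not the thesis's algorithm):

```python
import numpy as np

def fit_log_distance(d, pl, d0=1.0):
    """Least-squares fit of PL(d) = PL0 + 10*n*log10(d/d0).
    d: distances in metres, pl: measured path loss in dB."""
    x = 10.0 * np.log10(np.asarray(d) / d0)
    A = np.column_stack([np.ones_like(x), x])
    (pl0, n), *_ = np.linalg.lstsq(A, np.asarray(pl), rcond=None)
    return pl0, n

# Synthetic measurements: true PL0 = 40 dB, exponent n = 3.5, plus 2 dB noise.
rng = np.random.default_rng(0)
d = rng.uniform(10, 500, size=200)
pl = 40 + 35 * np.log10(d) + rng.normal(0, 2, size=200)

pl0, n = fit_log_distance(d, pl)
print(f"PL0 = {pl0:.1f} dB, n = {n:.2f}")  # recovers roughly 40 dB and 3.5
```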
136

Synthesis and design of demethaniser flowsheets for low temperature separation processes

Nawaz, Muneeb January 2011
A demethaniser process is characterised by interactions between the complex distillation column and other flowsheet units, including the turbo-expander, flash units, multistream exchangers and refrigeration system. When a design problem dealing with demethaniser flowsheets is approached in a systematic way, the number of alternatives to be studied is generally very large. The assessment of all possible flowsheets with numerous options is a time-consuming task, with many simulations required to select the most economic option. This research presents a systematic approach for demethaniser flowsheet synthesis to generate cost-effective designs with minimal time and effort. A demethaniser column has many degrees of freedom, including the operating pressure, multiple feeds, the number and duty of side reboilers and the flow rate of the external reflux stream. The additional feed and side reboiler streams enhance the efficiency of the process, but complicate process modelling. The number of design variables is also augmented by additional degrees of freedom such as the location and the order of feeds, the number of stages and the reflux ratio in the column. The complexity of the demethaniser column precludes the use of the Fenske–Underwood–Gilliland shortcut design method. A semi-rigorous boundary value method is proposed for the design of complex demethaniser columns for application within an optimisation framework for process synthesis and evaluation. The results of the proposed design methodology are shown to be in good agreement with those of rigorous simulation. A simplified flowsheet simulation model based on a sequential modular approach is developed that is able to account for various configurations and interconnections in the demethaniser process. Improved shortcut models for flash units, the turbo-expander, compressor and refrigeration cycle have been proposed for exploitation in a synthesis framework. A methodology accounting for heat integration in multistream exchangers is proposed. The simplified simulation model is applied for the optimisation of a flowsheet of fixed configuration. The nonlinear programming technique of sequential quadratic programming (SQP) is used as the optimisation method. A case study is presented to illustrate the application of the optimisation approach for maximising the annual profit. A generalised superstructure has been proposed for demethaniser flowsheet synthesis that includes various structural combinations in addition to the operational parameters. The various options included in the superstructure and their effects on flowsheet performance are discussed. A stochastic optimisation technique, simulated annealing, is applied to optimise the superstructure and generate energy-efficient and cost-effective flowsheets. The application of the developed synthesis methodology is illustrated by a case study of relevance to natural gas processing. The results allow insights to be obtained into the important trade-offs and interactions and indicate that the synthesis methodology can be employed as a tool for quantitative evaluation of preliminary designs as well as to facilitate evaluation, selection and optimisation of licensed demethaniser flowsheets.
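Superstructure optimisation with simulated annealing, as used in this abstract, follows a standard accept/reject loop over structural moves. A generic skeleton of that loop (the toy objective and neighbour move are placeholders, not the thesis's flowsheet model, which would be evaluated by simulation at each step):

```python
import math
import random

def simulated_annealing(cost, neighbour, x0, t0=1.0, cooling=0.95, iters=2000):
    """Generic simulated annealing: always accept improvements, accept
    worse moves with Boltzmann probability exp(-delta/temperature)."""
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = neighbour(x)
        fy = cost(y)
        if fy < fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# Toy stand-in for a flowsheet evaluation: choose a column pressure (bar)
# minimising an assumed cost curve; a real run would call the simulator instead.
cost = lambda p: (p - 32.0) ** 2 + 5.0
neighbour = lambda p: p + random.uniform(-1.0, 1.0)
print(simulated_annealing(cost, neighbour, x0=20.0))  # converges near p = 32
```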
137

Comparative pharmacokinetics of a single and double dose of a conventional oxytetracycline formulation in sheep, to allow for therapeutic optimisation

Snyman, Mathys Gerhardus 16 February 2009
In the veterinary industry, long-acting oxytetracycline formulations are loosely referred to as those formulations that only require a single dose at 20 mg/kg to achieve clinical cure, to be repeated after three days only if required. Short-acting oxytetracycline formulations are recommended for use once a day for four days, at a dose of 10 mg/kg IV and 10 mg/kg IM on day one, 10 mg/kg IM on day two and 5 mg/kg IM on days three and four. The primary objective of this study is to demonstrate that, based on pharmacokinetics, a double dose of a conventional short-acting, 135 mg/ml formulation of oxytetracycline has a longer action than a single dose of the same formulation. As a secondary objective, the efficacy and safety of a single, double dose of a conventional oxytetracycline formulation are compared to multiple, single doses of a conventional formulation as well as against a single dose of a long-acting formulation. Factors that influence the duration of action of a parenteral oxytetracycline formulation are reviewed, as is the pharmacokinetic/pharmacodynamic relationship of oxytetracycline. A single-dose, randomized, two-treatment, two-sequence cross-over experimental design as described by Grizzle (1965) was selected for this study. The washout period between the two sequences was determined using at least 5 half-lives (11.1 hours x 5) of the conventional oxytetracycline formulation, based on a study by Davey et al. (1985). Although a washout period of 55.5 hours for a dose rate of 20 mg/kg of oxytetracycline would have sufficed to ensure the absence of any residual drug in the central compartment of the experimental animals, it was decided to extend the washout period between treatment periods to 7 days (168 hours), mainly for practical reasons. Sample size determination was based on the rejection of the null hypothesis as described by Anderson and Hauck (1983); five animals per treatment group were selected. The sheep were equally and randomly assigned to either the group that would receive the 10 mg/kg dose first (group 1) or the group that would receive the 20 mg/kg dose first (group 2). For the cross-over treatment (phase 2), the animals remained in the groups they were allocated to for phase 1, but group 1 received the 20 mg/kg dose and group 2 received the 10 mg/kg dose. The volume of oxytetracycline was calculated based on a product oxytetracycline content of 135 mg/ml. The blood sample collection procedure was the same for phases (treatments) 1 and 2. Time 0 was the time of treatment. Samples were collected into 10 ml lithium-heparinized vacutainer glass tubes with 19G disposable needles at the following intervals (hrs): 0, 0.25, 0.5, 1, 2, 4, 6, 9, 12, 24, 36, 48, 72, 96. The oxytetracycline concentrations in plasma were determined using validated high-performance liquid chromatographic methodology. The difference between the two sets of results emanating from phase 1 and phase 2 of the study is used as the basis for presenting the results. Three pharmacokinetic parameters were used to compare the two treatments: Cmax (maximum plasma concentration), AUCinf (total area under the concentration curve) and T>0.5 (time that the drug concentration remains above 0.5 µg/ml). The geometric means of the results show that the 20 mg/kg treatment maintains levels above 0.5 µg/ml significantly longer than the 10 mg/kg treatment (37.4 hours versus 24 hours; p value 0.0013), that the 20 mg/kg dose reaches a significantly higher concentration than the 10 mg/kg dose (6.59 µg/ml versus 3.55 µg/ml; p value 0.000000), and that the 20 mg/kg treatment has an AUCinf greater than that of the 10 mg/kg treatment by a highly significant margin (120.63 µg/ml·hr versus 71.63 µg/ml·hr; p value 0.000001). In demonstrating that a conventional oxytetracycline formulation administered intramuscularly at double dose provides drug plasma concentrations above MIC for an average duration 13 hours longer than a single dose, the primary objective of the study was achieved. The study demonstrated that a single dose at 20 mg/kg of a conventional oxytetracycline formulation offers an acceptable alternative to conventional treatment regimens in terms of efficacy, target animal safety, and convenience to the user. / Dissertation (MMedVet)--University of Pretoria, 2008. / Paraclinical Sciences
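The three comparison metrics in this abstract (Cmax, AUC by the trapezoidal rule, and time above 0.5 µg/ml) are straightforward to compute from a concentration–time profile. A minimal sketch with an invented profile (illustrative numbers, not the study's data):

```python
import numpy as np

def pk_metrics(t, c, threshold=0.5):
    """Cmax, AUC(0-last) by the trapezoidal rule, and total time above
    threshold. t: sampling times (h), c: plasma concentrations (ug/ml)."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    cmax = c.max()
    auc = np.trapz(c, t)  # ug/ml * h over the sampled interval
    above = 0.0
    for i in range(len(t) - 1):
        c0, c1, dt = c[i], c[i + 1], t[i + 1] - t[i]
        if c0 >= threshold and c1 >= threshold:
            above += dt
        elif (c0 >= threshold) != (c1 >= threshold):
            # Linear interpolation of the point where c crosses the threshold.
            frac = (threshold - c0) / (c1 - c0)
            above += dt * frac if c0 >= threshold else dt * (1 - frac)
    return cmax, auc, above

# Illustrative profile (hours, ug/ml) -- not the study's data.
t = [0, 0.25, 0.5, 1, 2, 4, 6, 9, 12, 24, 36, 48]
c = [0, 2.1, 3.4, 4.0, 3.6, 2.9, 2.2, 1.5, 1.1, 0.45, 0.18, 0.07]
print(pk_metrics(t, c))
```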
138

Truss topology optimization using an improved species-conserving genetic algorithm

Li, Jian-Ping 06 February 2014
The aim of this article is to apply and improve the species-conserving genetic algorithm (SCGA) to search for multiple solutions of truss topology optimization problems in a single run. A species is defined as a group of individuals with similar characteristics and is dominated by its species seed. The solutions of an optimization problem will be selected from the found species. To improve the accuracy of solutions, a species mutation technique is introduced to improve the fitness of the found species seeds, and the combination of a neighbour mutation and a uniform mutation is applied to balance exploitation and exploration. A real vector is used to represent the corresponding cross-sectional areas, and a member is considered to exist if its area is larger than a critical area. A finite element analysis model was developed to deal with more practical considerations in modelling, such as the existence of members, kinematic stability analysis, and computation of stresses and displacements. Cross-sectional areas and node connections are decision variables and are optimized simultaneously to minimize the total weight of the trusses. Numerical results demonstrate that some truss topology optimization examples have many global and local solutions, that different topologies can be found using the proposed algorithm in a single run, and that some trusses have smaller weights than the solutions reported in the literature.
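The species mechanism this abstract builds on partitions the population around dominant "seeds": scanning individuals in fitness order, an individual founds a new species unless it lies within the species radius of an existing seed. A minimal sketch of that selection step (my paraphrase of the species-seed idea, with assumed toy data):

```python
import numpy as np

def select_species_seeds(population, fitness, radius):
    """Pick species seeds: scan individuals best-first; an individual becomes
    a seed if it is farther than `radius` (Euclidean distance in decision
    space) from every seed found so far."""
    order = np.argsort(fitness)  # minimisation: best (lowest) first
    seeds = []
    for i in order:
        x = population[i]
        if all(np.linalg.norm(x - population[s]) > radius for s in seeds):
            seeds.append(i)
    return seeds

# Assumed toy data: 6 candidate designs in a 2-D decision space.
pop = np.array([[0.0, 0.0], [0.1, 0.0], [2.0, 2.0],
                [2.1, 2.0], [5.0, 1.0], [0.2, 0.1]])
fit = np.array([3.0, 3.5, 1.0, 1.2, 2.0, 4.0])
print(select_species_seeds(pop, fit, radius=0.5))  # -> [2, 4, 0]
```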
139

Next generation of optimization and interactive planning algorithms for brachytherapy treatments

Bélanger, Cédric 19 January 2024
Brachytherapy is a treatment modality for cancer using ionizing radiation from a radioactive source. In high-dose-rate (HDR) brachytherapy, an afterloading unit guides the radioactive source near or inside the tumor via intracavitary (IC) applicators and/or interstitial (IS) catheters. By stopping for a specific amount of time (dwell time) at specific positions (dwell positions), a conformal radiation dose can be delivered to the tumor while sparing nearby organs at risk (OARs). However, because of the nature of ionizing radiation, it is impossible to deliver the curative dose to the tumor without exposing OARs; these conflicting objectives must instead be optimized simultaneously. The planning problem in HDR brachytherapy is therefore inherently a multi-criteria optimization (MCO) problem, where many optimal solutions (Pareto-optimal solutions) can effectively characterize the clinically relevant trade-offs. Current commercial planning algorithms in HDR brachytherapy are limited to the manual fine-tuning of an objective and/or dwell times. Inverse planning algorithms can generate only one treatment plan per optimization run (a few seconds of optimization time), without any guarantee of meeting clinical goals in the first run, which makes the planning task iterative and cumbersome for planners; plan quality may therefore depend on user skill. Furthermore, iterative generation of one treatment plan per optimization run, as done in the clinic, does not easily allow the planner to explore the trade-offs between targets and OARs. There is also an important gap in the optimization methods in the literature for efficiently incorporating recent complex IC/IS applicators (e.g., the Venezia applicator manufactured by Elekta, Veenendaal, The Netherlands) for cervical cancer brachytherapy. The main challenge for these complex applicators is to automatically determine the optimal IS catheter number, position, and depth, given the large number of degrees of freedom in the optimization problem and the large variation in tumor shapes. To address these problems, this thesis proposes a next generation of optimization and interactive planning algorithms for brachytherapy. A graphics processing unit (GPU)-based MCO algorithm (gMCO) is first implemented and compared with a standard inverse planning algorithm used in the clinic; gMCO implements a novel parallel plan optimization scheme on GPU architecture that can optimize thousands of Pareto-optimal plans within seconds. Next, to benefit fully from MCO in the clinic, an interactive graphical user interface called gMCO-GUI is developed to allow the planner to navigate and explore the trade-offs in real time through gMCO-generated plans; gMCO-GUI displays dose-volume histogram (DVH) indices, DVH curves, and isodose lines during plan navigation. To incorporate the proposed MCO workflow into the clinic, the commissioning of gMCO and gMCO-GUI is conducted against Oncentra Prostate and Oncentra Brachy, two widely used treatment planning systems. Following the commissioning, and to further characterize the use of MCO interactive planning in the clinic, an inter-observer study is conducted: two experienced physicists are asked to re-plan 20 prostate cases each using MCO interactive planning; the quality of the preferred plans (obtained by plan navigation) is compared between the two physicists and the MCO planning time is recorded. In addition, three radiation oncologists are invited to blindly compare MCO plans (generated by the physicists) and clinical plans to assess the best plan for each patient. Finally, motivated by the lack of catheter and dose optimization algorithms for cervical cancer treatment in commercial software and in the literature, a novel simultaneous catheter optimization and MCO algorithm for complex IC/IS applicators such as the Venezia applicator is designed; the optimization problem with the Venezia applicator is challenging because the applicator components are not coplanar. The dosimetric gain of simultaneous catheter optimization and MCO is compared with MCO alone (clinical catheters) and with clinical plans following the EMBRACE-II criteria. In summary, a next generation of optimization and interactive planning algorithms is developed for brachytherapy. The five main chapters of this thesis report the findings and scientific contributions of these algorithms compared with standard clinical planning, and the thesis also guides users in the integration of the proposed interactive MCO workflow into the clinic.
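Navigating thousands of candidate plans, as described in this abstract, presupposes extracting the non-dominated (Pareto-optimal) set from the optimizer's output. A minimal sketch of that filtering step for minimization objectives (illustrative only, not gMCO's implementation):

```python
import numpy as np

def pareto_mask(costs):
    """Boolean mask of non-dominated rows. costs[i, j] = objective j of
    plan i; all objectives are to be minimized."""
    n = costs.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        if not mask[i]:
            continue
        # Plan k dominates plan i if it is <= everywhere and < somewhere.
        dominated = (np.all(costs <= costs[i], axis=1)
                     & np.any(costs < costs[i], axis=1)).any()
        if dominated:
            mask[i] = False
    return mask

# Toy 2-objective example: (target underdose score, OAR dose score).
plans = np.array([[0.2, 0.9], [0.4, 0.4], [0.9, 0.1], [0.5, 0.5], [0.3, 0.8]])
print(np.nonzero(pareto_mask(plans))[0])  # -> [0 1 2 4]; plan 3 is dominated
```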
140

Dual sequential approximation methods in structural optimisation

Wood, Derren Wesley March 2012
Thesis (PhD)--Stellenbosch University, 2012 / This dissertation addresses a number of topics that arise from the use of a dual method of sequential approximate optimisation (SAO) to solve structural optimisation problems. Said approach is widely used because it allows relatively large problems to be solved efficiently by minimising the number of expensive structural analyses required. Some extensions to traditional implementations are suggested that can serve to increase the efficacy of such algorithms. The work presented herein is concerned primarily with three topics: the use of nonconvex functions in the definition of SAO subproblems, the global convergence of the method, and the application of the dual SAO approach to large-scale problems. Additionally, a chapter is presented that focuses on the interpretation of Sigmund’s mesh independence sensitivity filter in topology optimisation. It is standard practice to formulate the approximate subproblems as strictly convex, since strict convexity is a sufficient condition to ensure that the solution of the dual problem corresponds with the unique stationary point of the primal. The incorporation of nonconvex functions in the definition of the subproblems is rarely attempted. However, many problems exhibit nonconvex behaviour that is easily represented by simple nonconvex functions. It is demonstrated herein that, under certain conditions, such functions can be fruitfully incorporated into the definition of the approximate subproblems without destroying the correspondence or uniqueness of the primal and dual solutions. Global convergence of dual SAO algorithms is examined within the context of the CCSA method, which relies on the use and manipulation of conservative convex and separable approximations. This method currently requires that a given problem and each of its subproblems be relaxed to ensure that the sequence of iterates that is produced remains feasible. A novel method, called the bounded dual, is presented as an alternative to relaxation. Infeasibility is catered for in the solution of the dual, and no relaxation-like modification is required. It is shown that when infeasibility is encountered, maximising the dual subproblem is equivalent to minimising a penalised linear combination of its constraint infeasibilities. Upon iteration, a restorative series of iterates is produced that gains feasibility, after which convergence to a feasible local minimum is assured. Two instances of the dual SAO solution of large-scale problems are addressed herein. The first is a discrete problem regarding the selection of the point-wise optimal fibre orientation in the two-dimensional minimum compliance design of fibre-reinforced composite plates. It is solved by means of the discrete dual approach, and the formulation employed gives rise to a partially separable dual problem. The second instance involves the solution of planar material distribution problems subject to local stress constraints. These are solved in a continuous sense using a sparse solver. The complexity and dimensionality of the dual is controlled by employing a constraint selection strategy in tandem with a mechanism by which inconsequential elements of the Jacobian of the active constraints are omitted. In this way, both the size of the dual and the amount of information that needs to be stored in order to define the dual are reduced.
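The dual approach this abstract describes exploits separability: for a separable convex subproblem, the Lagrangian minimiser x(λ) is available in closed form per variable, so the dual reduces to a low-dimensional search over the multipliers. A minimal sketch for one quadratic subproblem with a single linear constraint, solving the dual by bisection (a toy instance of the idea, not the dissertation's formulation):

```python
import numpy as np

def solve_dual_subproblem(a, c, g, b, lo, hi, tol=1e-10):
    """Minimise sum_j 0.5*a_j*(x_j - c_j)^2  s.t.  g @ x <= b, lo <= x <= hi.
    Separability gives the closed-form Lagrangian minimiser x(lambda);
    the 1-D dual is then maximised by bisection on the constraint residual."""
    def x_of(lam):
        return np.clip(c - lam * g / a, lo, hi)

    if g @ x_of(0.0) <= b:           # constraint inactive: lambda* = 0
        return x_of(0.0), 0.0
    lam_lo, lam_hi = 0.0, 1.0
    while g @ x_of(lam_hi) > b:      # bracket the dual optimum
        lam_hi *= 2.0
    while lam_hi - lam_lo > tol:     # g @ x(lambda) is monotone decreasing
        mid = 0.5 * (lam_lo + lam_hi)
        if g @ x_of(mid) > b:
            lam_lo = mid
        else:
            lam_hi = mid
    lam = 0.5 * (lam_lo + lam_hi)
    return x_of(lam), lam

# Toy subproblem data (assumptions for illustration):
a = np.array([2.0, 1.0, 4.0])        # curvatures of the separable approximation
c = np.array([1.0, 2.0, 1.5])        # unconstrained per-variable minimisers
g = np.array([1.0, 1.0, 1.0])        # gradient of the (weight/volume) constraint
x, lam = solve_dual_subproblem(a, c, g, b=2.0, lo=np.zeros(3), hi=np.full(3, 3.0))
print(x, lam, g @ x)                 # g @ x ~= 2.0: the constraint is active
```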
