161

Contribution à la modélisation et à la régulation du trafic aux intersections : intégration des communications Véhicule-Infrastructure / Contribution to traffic modelling and control at intersections: integrating Vehicle-Infrastructure communications

Yan, Fei 14 March 2012 (has links)
In this thesis, we study the problem of traffic control in light of the new technologies emerging within Intelligent Transport Systems (ITS). A new control strategy is introduced to exploit the potential of traffic infrastructure to the fullest. Specifically, building on Vehicle-Infrastructure Integration (VII) technology, the road infrastructure at an intersection (which also acts as the controller) can communicate continuously with the autonomous vehicles approaching it. Key vehicle data such as speed, position and destination are captured by advanced sensors and sent to the controller in real time. It therefore becomes possible to design a traffic control strategy that treats each vehicle as an independent entity: the right of way is assigned to each vehicle according to its own state and to the overall state of traffic at the intersection, and only vehicles that have received the right of way may cross. Traffic control at an intersection thus amounts to determining the passage sequence of the vehicles, that is, the sequence in which rights of way are granted.

The main difficulty in applying this new strategy is the tension between optimizing the vehicle passage sequences and keeping the time complexity manageable. To resolve it, we first formulate the control problem mathematically and study its complexity. We prove, by reduction from the 3-Partition problem, that the traffic control problem at an isolated intersection is NP-hard under certain conditions (an arbitrary number of compatible flow groups, ...). We then apply exact resolution methods at an isolated intersection, proposing exact algorithms (Branch and Bound and dynamic programming) that find an optimal passage sequence. Several properties of the problem are introduced and proved so that these algorithms can exploit them; their purpose is to considerably reduce the search space and hence the running time of the exact algorithms.

We do not restrict our study to isolated intersections: the proposed control approach is also applied to a network of intersections managed by a single controller. An exact algorithm applied to several intersections, however, cannot be fast enough when information must be communicated to vehicles almost instantaneously (in real time). We therefore propose approximate resolution methods (genetic algorithms) that find a satisfactory passage sequence for each intersection in reasonable time, requiring far less computation while maintaining good solution quality.

Finally, we illustrate the implementation of the different proposed approaches through simulation results that evaluate their performance.
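As a hedged illustration of the kind of exact method this abstract describes, here is a minimal branch-and-bound sketch in Python for ordering vehicles through a shared conflict zone. The vehicle data, the fixed headway and the total-delay objective are invented for illustration and are not taken from the thesis.

```python
# Hypothetical data: (vehicle id, arrival time at the intersection, seconds).
VEHICLES = [("a", 0.0), ("b", 1.0), ("c", 1.5), ("d", 3.0)]
HEADWAY = 2.0  # assumed minimum separation between two consecutive crossings

def total_delay(order):
    """Sum of waiting times when vehicles cross in the given order."""
    t, delay = 0.0, 0.0
    for vid, arrival in order:
        t = max(t, arrival)          # a vehicle cannot cross before arriving
        delay += t - arrival         # waiting time of this vehicle
        t += HEADWAY                 # conflict zone busy until t + HEADWAY
    return delay

def branch_and_bound(vehicles):
    """Explore passage sequences, pruning branches whose accumulated
    delay already matches or exceeds the best complete sequence found."""
    best = (total_delay(vehicles), list(vehicles))  # initial incumbent: FIFO
    def recurse(prefix, remaining, t, delay):
        nonlocal best
        if delay >= best[0]:         # valid bound: delay only grows
            return
        if not remaining:
            best = (delay, prefix[:])
            return
        for i, (vid, arrival) in enumerate(remaining):
            start = max(t, arrival)
            recurse(prefix + [(vid, arrival)],
                    remaining[:i] + remaining[i + 1:],
                    start + HEADWAY,
                    delay + (start - arrival))
    recurse([], list(vehicles), 0.0, 0.0)
    return best

delay, order = branch_and_bound(VEHICLES)
print("optimal order:", [v for v, _ in order], "total delay:", delay)
```

With only a handful of vehicles exhaustive search would also work; the point of the sketch is the pruning step, which is what the problem properties proved in the thesis are meant to strengthen.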
162

Applications of Integer Quadratic Programming in Control and Communication

Axehill, Daniel January 2005 (has links)
The main topic of this thesis is integer quadratic programming with applications to problems arising in the areas of automatic control and communication. One of the most widespread modern control principles is the discrete-time method Model Predictive Control (MPC). The main advantage of MPC, compared to most other control principles, is that constraints on control signals and states can easily be handled. In each time step, MPC requires the solution of a Quadratic Programming (QP) problem. To be able to use MPC for large systems, and at high sampling rates, optimization routines tailored for MPC are used. In recent years, the range of application of MPC has been extended from constrained linear systems to so-called hybrid systems. Hybrid systems are systems where continuous dynamics interact with logic. When this extension is made, binary variables are introduced in the problem. As a consequence, the QP problem has to be replaced by a far more challenging Mixed Integer Quadratic Programming (MIQP) problem. Generally, for this type of optimization problem, the computational complexity is exponential in the number of binary optimization variables. In modern communication systems, multiple users share a so-called multi-access channel, where the information sent by different users is separated by using almost orthogonal codes. Since the codes are not completely orthogonal, the decoded information at the receiver is slightly correlated between different users. Further, noise is added during the transmission. To estimate the information originally sent, a maximum likelihood problem involving binary variables is solved. The process of simultaneously estimating the information sent by multiple users is called multiuser detection. In this thesis, the problem of efficiently solving MIQP problems originating from MPC is addressed. Two different algorithms are presented. First, a polynomial-complexity preprocessing algorithm for binary quadratic programming problems is presented. By using the algorithm, some, or all, binary variables can be computed efficiently already in the preprocessing phase. In simulations, the algorithm is applied to unconstrained MPC problems with a mixture of real and binary control signals. It has also been applied to the multiuser detection problem, where simulations have shown that the bit error rate can be significantly reduced compared to common suboptimal algorithms. Second, an MIQP algorithm tailored for MPC is presented. The algorithm uses a branch and bound method where the relaxed node problems are solved by a dual active set QP algorithm. In this QP algorithm, the KKT systems are solved using Riccati recursions in order to decrease the computational complexity. Simulation results show that both the QP solver and the MIQP solver proposed have lower computational complexity than corresponding generic solvers. / Report code: LiU-TEK-LIC-2005:71.
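The tailored Riccati-based solver described above is beyond a short example, but the underlying branch-and-bound-over-QP-relaxations idea can be sketched. Below is a hedged Python toy: the problem data are invented, and scipy's general-purpose bounded minimizer stands in for the dual active-set QP solver.

```python
import numpy as np
from scipy.optimize import minimize

# Made-up MIQP: minimize 0.5 x'Hx + f'x, x[0] continuous in [0,1], x[1], x[2] binary.
H = np.diag([2.0, 2.0, 2.0])
f = np.array([-2.0, -1.2, -0.8])
BINARY = [1, 2]  # indices constrained to {0, 1}

def solve_relaxation(lb, ub):
    """Box-bounded QP relaxation (stand-in for a dual active-set QP solver)."""
    obj = lambda x: 0.5 * x @ H @ x + f @ x
    res = minimize(obj, (lb + ub) / 2, bounds=list(zip(lb, ub)))
    return res.x, res.fun

def branch_and_bound():
    best_val, best_x = np.inf, None
    stack = [(np.zeros(3), np.ones(3))]          # root: binaries relaxed to [0,1]
    while stack:
        lb, ub = stack.pop()
        x, val = solve_relaxation(lb, ub)
        if val >= best_val:                      # prune: relaxation can't improve
            continue
        frac = [i for i in BINARY if min(x[i], 1 - x[i]) > 1e-6]
        if not frac:                             # all binaries integral: incumbent
            best_val, best_x = val, x
            continue
        i = frac[0]                              # branch on a fractional binary
        for fixed in (0.0, 1.0):
            nlb, nub = lb.copy(), ub.copy()
            nlb[i] = nub[i] = fixed
            stack.append((nlb, nub))
    return best_x, best_val

x, val = branch_and_bound()
print("x* =", np.round(x, 3), "objective =", round(val, 3))
```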
163

On Optimization in Design of Telecommunications Networks with Multicast and Unicast Traffic

Prytz, Mikael January 2002 (has links)
No description available.
164

Algorithmes Branch and Bound parallèles hétérogènes pour environnements multi-coeurs et multi-GPU / Heterogeneous parallel Branch and Bound algorithms for multi-core and multi-GPU environments

Chakroun, Imen 28 June 2013 (has links) (PDF)
Branch and Bound (B&B) algorithms are attractive for solving combinatorial optimization problems (COPs) exactly by exploring a tree-shaped search space. However, they are extremely time-consuming on large problem instances (for example, Taillard's benchmarks for the flow-shop problem, FSP), even when run on computational grids [Mezmaz et al., IEEE IPDPS'2007]. The massively parallel computing offered by today's heterogeneous platforms [TOP500] is required to handle such instances efficiently. The challenge is then to exploit all the underlying levels of parallelism, and hence to rethink the parallel models of B&B algorithms accordingly. In this thesis, we revisit the design and implementation of these algorithms for solving large COPs on (large) multi-core and multi-GPU platforms, taking the flow-shop scheduling problem (FSP) as a case study. A preliminary experimental study on some large FSP instances revealed that the search tree is highly irregular (in shape and size) and very large (quintillions of nodes), and that the bounding operator is extremely expensive (about 97% of the B&B running time). Our first contribution is therefore a GPU approach with a single CPU core (GB&B), in which only the bounding operator runs on the GPU. The approach addresses two challenges: thread divergence and optimized management of the GPU's hierarchical memory. Compared with a sequential version, speed-ups of up to ×100 are obtained on an Nvidia Tesla C2050. Performance analysis of GB&B showed that the overhead induced by CPU-GPU data transfers is high. The goal of the second contribution (LL-GB&B) is therefore to minimize the CPU-GPU communication latency, achieved through fine-grained GPU parallelization of the branching and pruning operators. The major challenge here is again thread divergence, due to the strongly irregular nature of the explored tree mentioned above. Compared with a sequential execution, LL-GB&B reaches speed-ups of up to ×160 on the largest instances. The third contribution studies the combined use of GPUs and multi-core processors. Two scenarios were explored, leading to two approaches: a concurrent one (RLL-GB&B) and a cooperative one (PLL-GB&B). In the first, the exploration process is carried out simultaneously by the GPU and the CPU cores; in the cooperative approach, the CPU cores prepare and transfer subproblems using CUDA streaming while the GPU performs the exploration. This combination showed that RLL-GB&B is not beneficial, while PLL-GB&B yields an improvement of up to 36% over LL-GB&B. Since computational grids such as Grid5000 have recently been equipped with GPUs at some sites, the fourth contribution of this thesis combines GPU and multi-core computing with large-scale distributed computing.
To this end, the proposed approaches are brought together in a heterogeneous meta-algorithm that automatically selects the algorithm to deploy according to the target hardware configuration. This meta-algorithm is coupled with the B&B@Grid approach proposed in [Mezmaz et al., IEEE IPDPS'2007]: B&B@Grid distributes the work units (search subspaces encoded as intervals) among the grid nodes, while the meta-algorithm chooses and locally deploys a parallel B&B algorithm on the received intervals. The combined approach allowed us to solve Taillard's 20×20 instances to optimality and efficiently.
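To give a flavour of the offload pattern described above, here is a hedged Python sketch that farms the expensive bounding operator out to worker processes, using multiprocessing as a CPU stand-in for the GPU kernel; the bound function and frontier data are invented for illustration and are far simpler than the flow-shop bound the thesis accelerates.

```python
from multiprocessing import Pool

def lower_bound(node):
    """Invented stand-in for a costly lower-bound evaluation on one B&B node.
    A node is (scheduled job times, remaining job times); the toy bound is
    the work already placed plus the sum of the remaining processing times."""
    partial_schedule, remaining = node
    return sum(partial_schedule) + sum(remaining)

def evaluate_frontier(nodes, workers=4):
    """Evaluate the bounds of a whole frontier of nodes in parallel,
    mirroring the batch offload of the bounding operator to a GPU."""
    with Pool(workers) as pool:
        return pool.map(lower_bound, nodes)

if __name__ == "__main__":
    frontier = [((3, 5), (2, 4, 6)), ((3, 2), (5, 4, 6)), ((5,), (3, 2, 4, 6))]
    bounds = evaluate_frontier(frontier)
    print(list(zip(frontier, bounds)))  # prune nodes whose bound can't beat the incumbent
```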
165

Applicability of deterministic global optimization to the short-term hydrothermal coordination problem

Ferrer Biosca, Alberto 30 March 2004 (has links)
This Thesis has been motivated by the interest in applying deterministic global optimization procedures to real-world problems with no special structure. We focus on the Short-Term Hydrothermal Coordination of Electricity Generation Problem (also called the Generation Problem in this Thesis), where the objective function and the nonlinear constraints are polynomials of degree up to four. In the Generation Problem no d.c. representation of the involved functions is available, nor can we take advantage of any special structure of the problem. Hence such a general problem does not at first seem to have any mathematical structure conducive to computational implementation. Nevertheless, when f(x) is a continuous function and S is a nonempty closed set, the problem can be transformed into an equivalent problem of the form: minimize l(z) subject to z ∈ D \ int C (a canonical d.c. program), where l(z) is a convex function (usually a linear one) and D and C are closed convex sets. A complementary convex structure such as D \ int C is not always apparent, and even when it is explicit, much work remains to bring it into a form amenable to efficient computational implementation. The attractive feature of this structure is that it involves convexity, so analytical tools from convex analysis, such as subdifferentials and supporting hyperplanes, can be used. On the other hand, since convexity appears in a reverse sense, these tools must be used in a specific way and combined with combinatorial tools such as cutting planes, branch and bound and outer approximation. We present the general complementary convex structure underlying global optimization problems and describe the Generation Problem, whose functions are d.c. because they are polynomials. Using the properties of d.c. functions, we restate the Generation Problem as an equivalent canonical d.c. program; moreover, exploiting the structure of its functions, the Generation Problem can be rewritten as a more convenient equivalent reverse convex program, yielding numerical advantages in the implementation. We introduce concepts and properties that allow an explicit representation of any polynomial as a difference of convex polynomials, based on the fact that the set of m-th powers of homogeneous polynomials of degree 1 generates the vector space of homogeneous polynomials of degree m. We also describe a new global optimization algorithm (the adapted algorithm) for solving the Generation Problem. Since the equivalent reverse convex program is unbounded, we use prismatic subdivisions instead of the usual conical ones, and we prove the convergence of the adapted algorithm by combining the prismatic subdivision process with an outer approximation procedure. We state the Minimal Norm Problem, via the concept of Least Deviation Decomposition, to obtain the optimal d.c.
representation of a polynomial function; this allows a more efficient implementation by reducing the number of iterations of the adapted algorithm. A quadratic semi-infinite programming algorithm is described: we propose a build-up and build-down strategy, introduced by Den Hertog (1997) for standard linear programs, which uses a logarithmic barrier method. Finally, computational results are given and conclusions are drawn.
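The canonical d.c. program and the polynomial decomposition at the heart of this abstract can be written compactly. The LaTeX restatement below is a sketch of those definitions; the explicit decomposition recipe shown is a standard one and is my assumption, not necessarily the least-deviation decomposition the thesis computes.

```latex
\[
\min\; l(z) \quad \text{s.t.} \quad z \in D \setminus \operatorname{int} C,
\]
% with $l$ convex and $D, C \subseteq \mathbb{R}^n$ closed convex sets.
% Every polynomial $p$ of degree $m$ is d.c.; one standard decomposition
% picks $2k \ge m$ and writes
\[
p(x) \;=\; \underbrace{\bigl(p(x) + \rho\,(1+\|x\|^2)^{k}\bigr)}_{g(x)}
       \;-\; \underbrace{\rho\,(1+\|x\|^2)^{k}}_{h(x)},
\]
% where $g$ and $h$ are convex polynomials once $\rho > 0$ is large enough,
% since the Hessian of $(1+\|x\|^2)^k$ grows at least as fast as that of $p$.
```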
167

Approches multicritères pour le traitement des débris spatiaux / Multicriteria approaches for dealing with space debris

Madakat, D. 16 June 2014 (has links) (PDF)
Space debris poses a threat to the exploration and exploitation of space. Its amount keeps growing, and would continue to grow even if all space activity stopped, increasing the probability of collision with an active satellite. Removing debris is the only way to protect these satellites. Since the number of debris objects is very large, the most dangerous ones must be identified first. In the first part of the thesis, we developed a multicriteria approach to rank debris according to its removal priority; the debris in the highest-priority class would then be targeted by a space debris removal mission. Planning such a mission is studied in the second part of the thesis. It must be carried out while minimizing two criteria: the cost of the mission and the time needed to treat all the debris. The shuttle moves from one orbit to another, treats the debris one by one, and then returns to its initial orbit. Since a transfer between two debris orbits can be performed in many ways, each corresponding to a possible trade-off between transfer time and cost, and since these costs and times depend on the departure and arrival dates on the orbits, the set of feasible solutions is defined over a dynamic directed multigraph. A tour in this graph defines a possible mission scenario, and the goal is to find the set of non-dominated tours, which amounts to solving a bi-objective, time-dependent travelling salesman problem. We developed a branch-and-bound algorithm to return the set of these tours. The algorithm is optimized at two levels:
- the number of possible transfers between two orbits is limited by avoiding cost computations for transfers that would turn out to be dominated;
- dominance rules are used to prune branches of the search tree that cannot lead to efficient solutions.
Experimental results illustrate the efficiency of the procedure.
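As a hedged sketch of the dominance idea used above, the following Python fragment keeps only the non-dominated (cost, duration) labels among candidate transfers between two orbits; the numeric labels are invented.

```python
def dominates(a, b):
    """a dominates b if it is no worse on both criteria and not identical."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def pareto_filter(labels):
    """Keep only non-dominated (cost, duration) pairs, e.g. the transfer
    options worth expanding between two debris orbits."""
    return [x for x in labels if not any(dominates(y, x) for y in labels)]

# Invented transfer options between two orbits: (delta-v cost, duration in days).
options = [(5.0, 12.0), (4.0, 20.0), (6.0, 11.0), (5.5, 12.5)]
print(pareto_filter(options))  # -> [(5.0, 12.0), (4.0, 20.0), (6.0, 11.0)]
```

The same test, applied to partial tours instead of single transfers, is what lets a bi-objective branch and bound cut whole subtrees.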
168

Enhance the understanding of whole-genome evolution by designing, accelerating and parallelizing phylogenetic algorithms

Yin, Zhaoming 22 May 2014 (has links)
The advent of new technology has increased the speed and reduced the cost of sequencing biological data. Making biological sense of this genomic data is a major challenge for algorithm design as well as for the high-performance computing community. There are many open problems in bioinformatics, such as how new functional genes arise, why genes are organized into chromosomes, how species are connected through the evolutionary tree of life, or why gene arrangements are subject to change. Phylogenetic analyses have become essential to research on the evolutionary tree of life. They can help us track the history of species and the relationships between different genes or genomes through millions of years. One of the foundations of phylogenetic construction is the computation of distances between genomes. Since rearrangement events exhibit complicated combinatorial patterns, distance computation remains a hot topic that belongs as much to mathematics as to biology. The distance computation is especially hard when the two input genomes have unequal gene content (insertions/deletions and duplications). In this thesis, we discuss our contributions to distance estimation for unequal gene-order data. The problem of finding the median of three genomes is the key step in building the most parsimonious phylogenetic trees from genome rearrangement data. For genomes with unequal content, to the best of our knowledge, no existing algorithm can find the median. We contribute to median computation in two ways. 1) On the algorithm engineering side, we harness the power of streaming graph analytics to implement an exact DCJ median algorithm that runs as fast as the heuristic algorithm and helps construct better phylogenetic trees. 2) On the algorithmic side, we theoretically formulate the problem of finding the median of genomes with unequal gene content, leading to the design and implementation of an efficient Lin-Kernighan-based heuristic median algorithm. Inferring the phylogeny (evolutionary history) of a set of species is the ultimate goal once the distance and median models are chosen. For more than a decade, biologists and computer scientists have studied how to infer phylogenies from genome rearrangement events measured on gene-order data. While evolution is not an inherently parsimonious process, maximum parsimony (MP) phylogenetic analysis has been widely applied to phylogeny inference to study the evolutionary patterns of genome rearrangements. Two problems generally arise in MP phylogenetics based on genome rearrangement: first, given a set of modern genomes, compute the topology of the corresponding phylogenetic tree; second, given the topology of a model tree, infer the gene orders of the ancestral species. Assembling an MP phylogenetic tree constructor involves multiple NP-hard problems which, unfortunately, are organized one on top of another: to solve one NP-hard problem, we need to solve multiple NP-hard subproblems. For phylogenetic tree construction from unequal-content genomes, there are three layers of NP-hard problems. In this thesis, we discuss our contributions to the design and implementation of the software package DCJUC (Phylogeny Inference using the DCJ model to cope with Unequal Content Genomes), which helps achieve both of these goals.
Aside from the biological problems, another concern is harnessing parallel computing to accelerate these algorithms on huge data sets such as high-resolution gene-order data. On the one hand, all of the methods for tackling these phylogenetic problems are based on branch-and-bound algorithms, which are quite irregular and unfriendly to parallel computing; to parallelize them, we must improve localized memory access and load balancing so that each thread can reach its full potential. On the other hand, a revolution is taking place in computing with the availability of commodity graphics processors such as Nvidia GPUs and of many-core processors such as the Cray XMT or the 60-core Intel Xeon Phi coprocessor. These architectures offer a new way to achieve high performance at much lower cost, but they are not easily programmed, and scientific code is hard to tune well on them. We explore the potential of these architectures to help accelerate branch-and-bound-based phylogenetic algorithms.
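For readers unfamiliar with genome rearrangement distances, here is a hedged Python sketch of the breakpoint distance, a much simpler relative of the DCJ distance used in the thesis; it assumes equal gene content, unsigned genes and linear genomes, all of which the thesis explicitly goes beyond.

```python
def adjacencies(genome):
    """Unordered adjacent gene pairs of a linear genome given as a gene list."""
    return {frozenset(pair) for pair in zip(genome, genome[1:])}

def breakpoint_distance(g1, g2):
    """Number of adjacencies of g1 absent from g2 (assumes equal gene content)."""
    return len(adjacencies(g1) - adjacencies(g2))

# Toy genomes over the same five genes: a single transposition of genes 2 and 3.
print(breakpoint_distance([1, 2, 3, 4, 5], [1, 3, 2, 4, 5]))  # -> 2
```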
169

An Optimizing Approach For Highway Safety Improvement Programs

Unal, Serter Ziya 01 June 2004 (has links) (PDF)
Improvements to highway safety have become a high priority for highway authorities due to increasing public awareness and concern about the high social and economic costs of accidents. Satisfying this priority within limited budgets, however, is difficult, so it is important to ensure that the funding available for highway safety improvements is used efficiently. To maximize the overall safety benefit, highway professionals usually invoke an optimization process. The objective of this thesis is to develop a model for selecting appropriate improvements at a set of black spots that provides the maximum reduction in the expected number of accidents (total return), subject to the constraint that the cost of implementing these improvements does not exceed the available budget. For this purpose, a computer program, BSAP (Black Spot Analysis Program), was developed. BSAP comprises two separate but integrated programs: the User Interface Program (UIP) and the Main Analysis Program (MAP). The MAP, coded in MATLAB, contains the optimization procedure itself and performs all the necessary calculations using a binary integer optimization model. The UIP, coded in Visual Basic, provides the menus for efficient data preparation and a user-friendly environment.
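The selection problem described here is structurally a 0/1 knapsack: each black-spot improvement has a cost and an expected accident reduction, and the budget caps total cost. Below is a hedged Python sketch of that formulation with invented figures; the thesis itself solves it with a binary integer optimization model in MATLAB, not this dynamic program.

```python
def select_improvements(projects, budget):
    """0/1 knapsack DP: maximize total expected accident reduction
    subject to total cost <= budget (costs in integer budget units)."""
    # best[c] = (value, chosen ids) achievable with total cost <= c
    best = [(0.0, [])] * (budget + 1)
    for name, cost, reduction in projects:
        for c in range(budget, cost - 1, -1):   # descending: each project used once
            cand_val = best[c - cost][0] + reduction
            if cand_val > best[c][0]:
                best[c] = (cand_val, best[c - cost][1] + [name])
    return best[budget]

# Invented projects: (black spot id, cost in units, expected accidents avoided/yr).
projects = [("BS1", 4, 7.5), ("BS2", 3, 5.0), ("BS3", 5, 8.0), ("BS4", 2, 3.5)]
value, chosen = select_improvements(projects, budget=9)
print(chosen, value)  # -> ['BS1', 'BS2', 'BS4'] 16.0 within the 9-unit budget
```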
170

A multi-objective stochastic approach to combinatorial technology space exploration

Patel, Chirag B. 18 May 2009 (has links)
Several techniques were studied to select and prioritize technologies for a complex system. Based on the findings, a method called Pareto Optimization and Selection of Technologies (POST) was formulated to efficiently explore the combinatorial technology space. A knapsack problem was selected as a benchmark problem to test-run the various algorithms and techniques of POST. A Monte Carlo simulation using surrogate models was used for uncertainty quantification, and concepts from graph theory were used to model and analyze compatibility constraints among technologies. A probabilistic Pareto optimization scheme, based on the Strength Pareto Evolutionary Algorithm II (SPEA2), was formulated for an uncertain objective space. As a result, multiple Pareto hyper-surfaces were obtained in a multi-dimensional objective space, each hyper-surface representing a specific probability level. These Pareto layers enabled the probabilistic comparison of various non-dominated technology combinations. POST was implemented on a technology exploration problem for a 300-passenger commercial aircraft. The problem had 29 identified technologies with uncertain impacts on the system, the uncertainties being described by beta distributions. Surrogate system models in the form of Response Surface Equations (RSEs) were used to map the technology impacts onto the system responses. The computational complexity of the technology graph was evaluated, and an evolutionary algorithm was chosen for the probabilistic Pareto optimization. The dimensionality of the objective space was reduced using a dominance-structure-preserving approach, and the probabilistic Pareto optimization was run with the reduced set of objectives. Most of the technologies were found to be active on the Pareto layers. These layers were exported to a dynamic visualization environment enabled by JMP, a statistical analysis and visualization package. The technology combinations on these Pareto layers were explored using various visualization tools and one combination was selected. The main outcome of this research is a method, based on a consistent analytical foundation, to create a dynamic trade-off environment in which decision makers can interactively explore and select technology combinations.
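As a hedged illustration of probabilistic Pareto comparison, the sketch below estimates, by Monte Carlo sampling of beta-distributed impacts, the probability that one technology combination dominates another. The distributions and the two-response setup are invented stand-ins for the thesis's surrogate models and 29-technology problem.

```python
import random

# Invented beta-distributed impacts of two technology combinations on two
# "smaller is better" normalized responses (e.g. fuel burn and weight deltas).
COMBO_A = [(2.0, 5.0), (2.0, 4.0)]   # (alpha, beta) parameters per response
COMBO_B = [(3.0, 4.0), (2.5, 3.5)]

def sample(combo):
    """Draw one realization of the combination's impact on each response."""
    return [random.betavariate(a, b) for a, b in combo]

def dominance_probability(x, y, n=20000):
    """Monte Carlo estimate of P(a sample of x Pareto-dominates a sample of y)."""
    wins = 0
    for _ in range(n):
        sx, sy = sample(x), sample(y)
        if all(u <= v for u, v in zip(sx, sy)) and any(u < v for u, v in zip(sx, sy)):
            wins += 1
    return wins / n

print("P(A dominates B) ≈", dominance_probability(COMBO_A, COMBO_B))
```

Repeating this comparison across all candidate combinations, at several probability thresholds, is one way to obtain layered Pareto fronts of the kind the abstract describes.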
