91
Branch-and-Price Method for Stochastic Generalized Assignment Problem, Hospital Staff Scheduling Problem and Stochastic Short-Term Personnel Planning Problem. Kim, Seon Ki, 27 March 2009.
The work presented in this dissertation has been focused on exploiting the branch-and-price (BNP) method for the solution of various stochastic mixed integer programming problems (MIPs). In particular, we address the stochastic generalized assignment problem (SGAP), a hospital staff scheduling problem (HSSP), a stochastic hospital staff scheduling problem (SHSSP), and a stochastic short-term personnel planning problem (SSTPP). The BNP method has been developed in concert with the dual stabilization technique and other enhancements of this method for each of these problems. In view of an excessive number of scenarios that arise for these problems, we also implement the Monte Carlo method within the BNP scheme. The superiority of the BNP-based method over the branch-and-cut (BNC) method is demonstrated for all of these problems.
The first problem that we address is the SGAP, for which the processing time of a job on a machine is assumed to be stochastic. Although the generalized assignment problem (GAP) has been solved using the BNP method, no study has been reported in the literature on the use of the BNP method for the solution of the SGAP. Our work has been motivated by the desire to fill this gap.
We begin by showing that it is better to solve the SGAP as a stochastic program in contrast to solving it by using the expected values of the times required to process the jobs on the machines. Then, we show that the stochastic model of the SGAP is a complete recourse model — a useful property which permits the first stage decisions to produce feasible solutions for the recourse problems. We develop three BNP-based methods for the solution of the SGAP. The first of these is BNP-SGAP, which is a combination of branch-and-bound and column generation methods. The pricing problem of BNP-SGAP is separable with regard to each machine, and it is a multiple-constraint knapsack problem. The second method is BNP-SGAP implemented in concert with the dual stabilization technique (DST), and it is designated as BNPDST-SGAP. We have introduced a new DST by modifying the Boxstep method of Pigatti et al. [76]. We have shown that our method performs better than the method of Pigatti et al. [76] resulting in over two-fold savings in cpu times on average. The third method that we develop for the solution of the SGAP is BNPDST-SGAP implemented with an advanced start to obtain an initial feasible solution. We use a greedy heuristic to obtain this solution, and this heuristic is a modification of a similar method used for the knapsack problem. It relies on the information available at a node of the underlying branch-and-bound tree. We have shown that this procedure obtains an initial feasible solution, if it exists at that node. We designate this method as BNPDSTKP-SGAP. We have also developed a BNC method to solve the SGAP using CPLEX 9.0. We have compared the performances of the BNP and BNC methods on various problem instances obtained by varying the number of machines, the ratio of the number of machines to the number of jobs, the machine capacity, and the penalty cost per unit of extra resource required at each machine. Our results show that all BNP-based methods perform better than the BNC method, with the best performance obtained for BNPDSTKP-SGAP.
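A minimal sketch of a greedy advanced-start idea of the kind described: the function below (hypothetical data and function name) builds an initial feasible assignment for a deterministic GAP instance using a knapsack-style regret rule. It is an illustration only, not the dissertation's exact heuristic, which additionally accounts for the assignments already fixed at the current branch-and-bound node.

```python
# Illustrative greedy advanced-start heuristic for a (deterministic) GAP instance.
# Assumption: cost[i][j] and resource[i][j] give the cost and resource use of
# assigning job j to machine i; capacity[i] is the remaining capacity of machine i.
# Jobs are assigned in order of decreasing "regret" (gap between the two cheapest
# feasible machines), a common knapsack-style rule.

def greedy_gap_start(cost, resource, capacity):
    m, n = len(cost), len(cost[0])          # machines, jobs
    remaining = list(capacity)
    assignment = [None] * n                  # assignment[j] = machine index

    def feasible_machines(j):
        return [i for i in range(m) if resource[i][j] <= remaining[i]]

    unassigned = set(range(n))
    while unassigned:
        best_job, best_machine, best_regret = None, None, -1.0
        for j in unassigned:
            cands = feasible_machines(j)
            if not cands:
                return None                  # no feasible completion found
            cands.sort(key=lambda i: cost[i][j])
            regret = (cost[cands[1]][j] - cost[cands[0]][j]) if len(cands) > 1 else float("inf")
            if regret > best_regret:
                best_job, best_machine, best_regret = j, cands[0], regret
        assignment[best_job] = best_machine
        remaining[best_machine] -= resource[best_machine][best_job]
        unassigned.remove(best_job)
    return assignment

# Example with made-up data: 2 machines, 3 jobs.
print(greedy_gap_start(cost=[[4, 2, 5], [3, 6, 4]],
                       resource=[[3, 2, 4], [2, 3, 3]],
                       capacity=[6, 6]))
```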
An issue with the use of the scenario-based methods that we have employed for the solution of the SGAP is that the number of scenarios generally grows exponentially in the problem parameters, which gives rise to a large-size problem. To overcome the complexity caused by the presence of a large number of scenarios, we introduce the use of the Monte Carlo method (MCM) within the BNP scheme. We designate this method as BNPDSTKP-SGAP with MCM. It affords the use of a small subset of scenarios at a time to estimate the "true" optimal objective function value. Replications of the subsets of scenarios are carried out until the objective function value satisfies a stopping criterion. We have established theoretical results for the use of the MCM. These pertain to determining unbiased estimates of: (i) lower and upper bounds of the "true" optimal objective function value, (ii) the "true" optimal solution, and (iii) the optimality gap. We have also provided the 100(1-α)% confidence interval on the optimality gap. Our experimental investigation has shown the efficacy of using this method. It obtains almost optimal solutions, with the objective function value lying within 5% of the "true" optimal objective function value, while giving almost ten-fold savings in cpu time. Our experimentation has also revealed that an increase in the number of scenarios in each replication has a greater impact on the quality of the solution obtained than an increase in the number of replications. We have also observed the impact of a change in the variance of a processing time distribution on cpu time. As expected, the optimal objective function value increases with an increase in processing time variability. Also, by comparing the results with the expected-value solution, it is observed that the greater the variability in the data, the better it is to use the stochastic program.
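The replication scheme can be sketched roughly as follows. The names solve, evaluate, and sample are hypothetical interfaces (a scenario-subset solver, an out-of-sample evaluator, and a scenario sampler), and the estimators shown are a generic sample-average-approximation style construction, not the dissertation's exact unbiased estimators.

```python
# Sketch of sample-average-approximation style gap estimation by replication.
# Hypothetical interfaces: solve(scenarios) -> (value, solution),
# evaluate(solution, scenarios) -> value, sample(k) -> list of k scenarios.
from statistics import mean, stdev, NormalDist

def estimate_gap(solve, evaluate, sample, n_small, n_large, replications, alpha=0.05):
    lb_samples = []                        # optimal values of small scenario subsets
    best_value, best_solution = float("inf"), None
    for _ in range(replications):          # replications >= 2 assumed
        value, solution = solve(sample(n_small))
        lb_samples.append(value)
        if value < best_value:
            best_value, best_solution = value, solution

    # Lower-bound estimate: mean of replication optima (biased low for minimization).
    lb = mean(lb_samples)
    # Upper-bound estimate: the candidate solution evaluated on large fresh samples.
    ub_samples = [evaluate(best_solution, sample(n_large)) for _ in range(replications)]
    ub = mean(ub_samples)

    gap = ub - lb
    z = NormalDist().inv_cdf(1 - alpha)    # one-sided normal quantile
    half_width = z * ((stdev(lb_samples) ** 2 / replications
                       + stdev(ub_samples) ** 2 / replications) ** 0.5)
    return {"lower": lb, "upper": ub, "gap": gap,
            "gap_ci": (0.0, max(0.0, gap + half_width))}
```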
The second problem that we study is the hospital staff scheduling problem. We address the following three versions of this problem: HSSP (General): Implementation of a schedule incorporating the four principal elements, namely, surgeons, operations, operating rooms, and operation times; HSSP (Priority): Inclusion of priority for some surgeons over other surgeons regarding the use of the facility in HSSP (General); HSSP (Pre-arranged): Implementation of a completely pre-fixed schedule for some surgeons. The consideration of priority among the surgeons mimics reality. Our BNP method for the solution of these problems is similar to that for the SGAP except for the following: (i) a feasible solution at a node is obtained with no additional assignment, i.e., it consists of the assignments made at the predecessor nodes of that node in the branch-and-bound tree; (ii) the columns with positive reduced cost are candidates for augmentation in the column generation method (CGM); and (iii) a new branching variable selection strategy is introduced, which selects as the branching variable the fractional variable whose fixing forces the largest number of variables to either 0 or 1. The priority problem is separable by surgeon.
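The branching rule in (iii) can be read schematically as follows. The code assumes, for illustration only, that the forcing effect comes from "choose exactly one per group" assignment constraints, with hypothetical variable keys; it is a sketch of the selection idea, not the exact rule implemented in the dissertation.

```python
# Schematic branching-variable selection: pick the fractional variable whose
# fixing to 1 would force the largest number of other variables to 0 through the
# "exactly one per group" assignment constraints.
# Hypothetical inputs: values[v] is the LP value of variable v, and groups is a
# list of sets of variables that appear together in one assignment constraint.

def select_branching_variable(values, groups, tol=1e-6):
    fractional = [v for v, x in values.items() if tol < x < 1 - tol]
    best_var, best_count = None, -1
    for v in fractional:
        # Fixing v = 1 forces every other variable in each of v's groups to 0.
        forced = set()
        for g in groups:
            if v in g:
                forced |= (g - {v})
        if len(forced) > best_count:
            best_var, best_count = v, len(forced)
    return best_var

# Example (made-up): variables keyed by (surgeon, operation, room).
values = {("s1", "o1", "r1"): 0.5, ("s1", "o1", "r2"): 0.5, ("s2", "o2", "r1"): 1.0}
groups = [{("s1", "o1", "r1"), ("s1", "o1", "r2")},   # operation o1 assigned once
          {("s1", "o1", "r1"), ("s2", "o2", "r1")}]   # room r1 used once per slot
print(select_branching_variable(values, groups))
```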
The results of our experimentation have shown the efficacy of using the BNP-based method for the solution of each HSSP, as it takes advantage of the inherent structure of each of these problems. We have also compared their performances with that of the BNC method developed using CPLEX. For the formulations HSSP (General), HSSP (Priority), and HSSP (Pre-arranged), the BNP method gives better results than the BNC method for 22 out of 30, 29 out of 34, and 20 out of 32 experiments, respectively. Furthermore, while the BNC method fails to obtain an optimal solution for 15 experiments, the BNP method obtains optimal solutions for all 96 experiments conducted. Thus, the BNP method consistently outperforms the BNC method for all of these problems.
The third problem that we have investigated in this study is the stochastic version of the HSSP, designated as the Stochastic HSSP (SHSSP), in which the operation times are assumed to be stochastic. We have introduced a formulation for this problem, designated as SHSSP2 (General), which allows overlapping of schedules for surgeons and operating rooms and also allows a surgeon to be assigned an operation that takes less than the pre-arranged operation time, with appropriate penalty costs incurred in either case. A comparison of the solution of SHSSP2 (General) and its value with those obtained by using expected values (the corresponding problem is designated as Expected-SHSSP2 (General)) reveals that Expected-SHSSP2 (General) may end up with inferior and infeasible schedules. We show that the recourse model for SHSSP2 (General) is a relatively complete recourse model. Consequently, we use the Monte Carlo method (MCM) to reduce the complexity of solving SHSSP2 (General) by considering fewer scenarios. We employ the branch-and-cut (BNC) method in concert with the MCM for solving SHSSP2 (General). The solution obtained is evaluated using the tolerance ratio, closeness to optimality, length of the confidence interval, and cpu time. The MCM substantially reduces the computational effort while producing almost optimal solutions and small confidence intervals.
We have also considered a special case of SHSSP2 (General) that allows no overlapping of schedules for surgeons and operating rooms and assigns exactly the same operation time to each assignment under every scenario; we designate it as SHSSP2 (Special). In addition, we consider another formulation that relies on the longest operation time among all scenarios for each assignment of a surgeon to an operation in order to avoid scheduling conflicts, and we designate this problem as SHSSP (Longest). We show SHSSP (Longest) to be equivalent to the deterministic HSSP, designated as HSSP (Equivalent), and we further prove it to be equivalent to SHSSP (General) in terms of the optimal objective function value and the optimal assignments of operations to surgeons. The schedule produced by HSSP (Equivalent) does not allow any overlap among the operations performed in an operating room; that is, a new operation cannot start if a previous operation scheduled in that room takes longer than expected. However, the schedule generated by HSSP (Equivalent) may turn out to be conservative and may contain voids of unused resources when an operation in an operating room finishes earlier than the longest time allowed. Nevertheless, the schedule is still feasible. In such a case, the schedule can be left-shifted, if possible, because the scenarios have by then been revealed. Moreover, such voids could be used to perform other procedures (e.g., emergency operations) that are outside the scope of the SHSSP addressed here. Besides, such a schedule can provide useful guidelines for planning resources ahead of time.
The fourth problem that we have addressed in this dissertation is the stochastic short-term personnel planning problem, designated as Stochastic STPP (SSTPP). This problem arises from the need to find appropriate temporary contractors (workers) to perform requisite jobs. We incorporate uncertainty in the processing time or the amount of resource required by a contractor to perform a job. Contrary to the SGAP, the recourse model for this problem is not a relatively complete recourse model. As a result, we cannot employ the MCM for the solution of this problem, as it may give rise to an infeasible solution. The BNP method for the SSTPP employs the DST and the advanced start procedure developed for the SGAP, and, due to the extra constraints and the presence of binary decision variables, we use the branching variable selection strategy developed for the HSSP models. Because of the distinctive properties of the SSTPP, we have introduced a new node selection strategy. We have compared the performances of the BNC-based and BNP-based methods based on the cpu time required. The BNP method outperforms the BNC method in 75% of the experiments conducted, and it is found to be quite stable, with a smaller variance in cpu times than that of the BNC method. It affords solution of difficult problems in smaller cpu times than those required by the BNC method. / Ph. D.
92
Approche efficace pour la conception des architectures multiprocesseurs sur puce électronique. Elie, Etienne, 12 1900.
Les systèmes multiprocesseurs sur puce électronique (On-Chip Multiprocessor [OCM]) sont considérés comme les meilleures structures pour occuper l'espace disponible sur les circuits intégrés actuels. Dans nos travaux, nous nous intéressons à un modèle architectural, appelé architecture isométrique de systèmes multiprocesseurs sur puce, qui permet d'évaluer, de prédire et d'optimiser les systèmes OCM en misant sur une organisation efficace des nœuds (processeurs et mémoires), et à des méthodologies qui permettent d'utiliser efficacement ces architectures.
Dans la première partie de la thèse, nous nous intéressons à la topologie du modèle et nous proposons une architecture qui permet d'utiliser efficacement et massivement les mémoires sur la puce. Les processeurs et les mémoires sont organisés selon une approche isométrique qui consiste à rapprocher les données des processus plutôt que d'optimiser les transferts entre les processeurs et les mémoires disposés de manière conventionnelle. L'architecture est un modèle maillé en trois dimensions. La disposition des unités sur ce modèle est inspirée de la structure cristalline du chlorure de sodium (NaCl), où chaque processeur peut accéder à six mémoires à la fois et où chaque mémoire peut communiquer avec autant de processeurs à la fois.
Dans la deuxième partie de notre travail, nous nous intéressons à une méthodologie de décomposition où le nombre de nœuds du modèle est idéal et peut être déterminé à partir d'une spécification matricielle de l'application qui est traitée par le modèle proposé. Sachant que la performance d'un modèle dépend de la quantité de flot de données échangées entre ses unités, en l'occurrence leur nombre, et notre but étant de garantir une bonne performance de calcul en fonction de l'application traitée, nous proposons de trouver le nombre idéal de processeurs et de mémoires du système à construire. Aussi, considérons-nous la décomposition de la spécification du modèle à construire ou de l'application à traiter en fonction de l'équilibre de charge des unités. Nous proposons ainsi une approche de décomposition sur trois points : la transformation de la spécification ou de l'application en une matrice d'incidence dont les éléments sont les flots de données entre les processus et les données, une nouvelle méthodologie basée sur le problème de la formation des cellules (Cell Formation Problem [CFP]), et un équilibre de charge de processus dans les processeurs et de données dans les mémoires.
Dans la troisième partie, toujours dans le souci de concevoir un système efficace et performant, nous nous intéressons à l'affectation des processeurs et des mémoires par une méthodologie en deux étapes. Dans un premier temps, nous affectons des unités aux nœuds du système, considéré ici comme un graphe non orienté, et dans un deuxième temps, nous affectons des valeurs aux arcs de ce graphe. Pour l'affectation, nous proposons une modélisation des applications décomposées en utilisant une approche matricielle et l'utilisation du problème d'affectation quadratique (Quadratic Assignment Problem [QAP]). Pour l'affectation de valeurs aux arcs, nous proposons une approche de perturbation graduelle, afin de chercher la meilleure combinaison du coût de l'affectation, ceci en respectant certains paramètres comme la température, la dissipation de chaleur, la consommation d'énergie et la surface occupée par la puce.
Le but ultime de ce travail est de proposer aux architectes de systèmes multiprocesseurs sur puce une méthodologie non traditionnelle et un outil systématique et efficace d'aide à la conception dès la phase de la spécification fonctionnelle du système. / On-Chip Multiprocessor (OCM) systems are considered to be the best structures to occupy the abundant space available on today's integrated circuits (ICs). In our thesis, we are interested in an architectural model, called Isometric on-Chip Multiprocessor Architecture (ICMA), that optimizes OCM systems by focusing on an effective organization of cores (processors and memories) and on methodologies that optimize the use of these architectures.
In the first part of this work, we study the topology of ICMA and propose an architecture that enables efficient and massive use of on-chip memories. ICMA organizes processors and memories in an isometric structure with the objective to get processed data close to the processors that use them rather than to optimize transfers between processors and memories, arranged in a conventional manner. ICMA is a mesh model in three dimensions. The organization of our architecture is inspired by the crystal structure of sodium chloride (NaCl), where each processor can access six different memories and where each memory can communicate with six processors at once.
In the second part of our work, we focus on a methodology of decomposition. This methodology is used to find the optimal number of nodes for a given application or specification. The approach we use is to transform an application or a specification into an incidence matrix whose entries are the interactions between processors and memories. In other words, knowing that the performance of a model depends on the intensity of the data flow exchanged between its units, namely their number, we aim to guarantee good computing performance by finding the optimal number of processors and memories that are suitable for the application computation. We also consider the load balancing of the units of ICMA during the specification phase of the design. Our proposed decomposition rests on three points: the transformation of the specification or application into an incidence matrix, a new methodology based on the Cell Formation Problem (CFP), and load balancing of processes across processors and of data across memories.
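To make the cell-formation step concrete, the following sketch computes grouping efficacy, a standard quality measure for a decomposition of an incidence matrix into cells. The measure, the cell labels, and the toy matrix are illustrative assumptions, not necessarily the criterion used in this thesis.

```python
import numpy as np

# Grouping efficacy of a cell decomposition of a process/data incidence matrix:
#   tau = (e - e_out) / (e + e_void)
# e     : total nonzero entries (data flows)
# e_out : nonzeros falling outside their cell (exceptional elements)
# e_void: zeros falling inside a cell (voids)
def grouping_efficacy(A, row_cell, col_cell):
    A = np.asarray(A)
    e = np.count_nonzero(A)
    rows = np.asarray(row_cell)[:, None]        # cell of each process (row)
    cols = np.asarray(col_cell)[None, :]        # cell of each datum (column)
    inside = rows == cols                       # True where an entry lies in its cell
    e_out = np.count_nonzero(A[~inside])
    e_void = np.count_nonzero(inside & (A == 0))
    return (e - e_out) / (e + e_void)

# Toy incidence matrix (4 processes x 4 data items) and a 2-cell decomposition.
A = [[1, 1, 0, 0],
     [1, 1, 0, 1],
     [0, 0, 1, 1],
     [0, 0, 1, 1]]
print(grouping_efficacy(A, row_cell=[0, 0, 1, 1], col_cell=[0, 0, 1, 1]))
```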
In the third part, we focus on the allocation of processors and memories through a two-step methodology. Initially, we allocate units to the nodes of the system structure, considered here as an undirected graph, and subsequently we assign values to the arcs of this graph. For the allocation, we propose to model the decomposed application using a matrix approach and the Quadratic Assignment Problem (QAP). For the assignment of values to the arcs, we propose an approach of gradual changes of these values in order to seek the best-cost assignment, subject to constraints such as temperature, heat dissipation, power consumption and the area occupied by the chip.
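The placement step can be illustrated with a generic QAP sketch: given a flow matrix between decomposed units and a distance matrix between node positions, a pairwise-exchange local search looks for a low-cost placement. This is a textbook heuristic with made-up data, included only for illustration, not the gradual-perturbation procedure proposed in the thesis.

```python
import numpy as np

# QAP placement sketch: perm[k] = mesh node assigned to unit k.
# Cost = sum over unit pairs of flow[i, j] * dist[perm[i], perm[j]].
def qap_cost(flow, dist, perm):
    return float(np.sum(flow * dist[np.ix_(perm, perm)]))

def pairwise_exchange(flow, dist, perm):
    perm = list(perm)
    improved = True
    while improved:
        improved = False
        for i in range(len(perm)):
            for j in range(i + 1, len(perm)):
                candidate = perm[:]
                candidate[i], candidate[j] = candidate[j], candidate[i]
                if qap_cost(flow, dist, candidate) < qap_cost(flow, dist, perm):
                    perm, improved = candidate, True
    return perm, qap_cost(flow, dist, perm)

# Toy data: 4 units (processors/memories) to place on 4 mesh nodes.
rng = np.random.default_rng(0)
flow = rng.integers(0, 5, size=(4, 4))          # made-up data flows between units
dist = np.array([[0, 1, 1, 2],                  # made-up hop distances between nodes
                 [1, 0, 2, 1],
                 [1, 2, 0, 1],
                 [2, 1, 1, 0]])
print(pairwise_exchange(flow, dist, perm=[0, 1, 2, 3]))
```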
The ultimate goal of this work is to offer on-chip multiprocessor system architects a non-traditional methodology and a systematic, effective design-support tool, usable from the functional specification phase of the system onward.
94
Assessing the robustness of genetic codes and genomes. Sautié Castellanos, Miguel, 06 1900.
Deux approches principales existent pour évaluer la robustesse des codes génétiques et des séquences de codage. L'approche statistique est basée sur des estimations empiriques de probabilité calculées à partir d'échantillons aléatoires de permutations représentant les affectations d'acides aminés aux codons, alors que l'approche basée sur l'optimisation repose sur le pourcentage d’optimisation, généralement calculé en utilisant des métaheuristiques. Nous proposons une méthode basée sur les deux premiers moments de la distribution des valeurs de robustesse pour tous les codes génétiques possibles. En se basant sur une instance polynomiale du Problème d'Affectation Quadratique, nous proposons un algorithme vorace exact pour trouver la valeur minimale de la robustesse génomique. Pour réduire le nombre d'opérations de calcul des scores et de la borne supérieure de Cantelli, nous avons développé des méthodes basées sur la structure de voisinage du code génétique et sur la comparaison par paires des codes génétiques, entre autres. Pour calculer la robustesse des codes génétiques naturels et des génomes procaryotes, nous avons choisi 23 codes génétiques naturels, 235 propriétés d'acides aminés, ainsi que 324 procaryotes thermophiles et 418 procaryotes non thermophiles. Parmi nos résultats, nous avons constaté que bien que le code génétique standard soit plus robuste que la plupart des codes génétiques, certains codes génétiques mitochondriaux et nucléaires sont plus robustes que le code standard aux troisièmes et premières positions des codons, respectivement. Nous avons observé que l'utilisation des codons synonymes tend à être fortement optimisée pour amortir l'impact des changements d'une seule base, principalement chez les procaryotes thermophiles. / There are two main approaches to assess the robustness of genetic codes and coding sequences. The statistical approach is based on empirical estimates of probabilities computed from random samples of permutations representing assignments of amino acids to codons, whereas, the optimization-based approach relies on the optimization percentage frequently computed by using metaheuristics. We propose a method based on the first two moments of the distribution of robustness values for all possible genetic codes. Based on a polynomially solvable instance of the Quadratic Assignment Problem, we propose also an exact greedy algorithm to find the minimum value of the genome robustness. To reduce the number of operations for computing the scores and Cantelli’s upper bound, we developed methods based on the genetic code neighborhood structure and pairwise comparisons between genetic codes, among others. For assessing the robustness of natural genetic codes and genomes, we have chosen 23 natural genetic codes, 235 amino acid properties, as well as 324 thermophilic and 418 non-thermophilic prokaryotes. Among our results, we found that although the standard genetic code is more robust than most genetic codes, some mitochondrial and nuclear genetic codes are more robust than the standard code at the third and first codon positions, respectively. We also observed that the synonymous codon usage tends to be highly optimized to buffer the impact of single-base changes, mainly, in thermophilic prokaryotes.
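The one-sided bound mentioned above is, in its textbook form, Cantelli's inequality. The statement below is the standard version, included here only to show how the first two moments of the distribution of robustness scores over all possible codes yield an upper bound on a tail probability; the interpretation of the score direction is left open.

```latex
% Cantelli's (one-sided Chebyshev) inequality for a random robustness score X
% with mean \mu and variance \sigma^2:
\[
  \Pr\left( X - \mu \ge t \right) \;\le\; \frac{\sigma^2}{\sigma^2 + t^2},
  \qquad t > 0 .
\]
% Hence the probability that a randomly drawn code attains a score of at least
% x_0 = \mu + t can be bounded using only the first two moments of the
% distribution of scores over all possible genetic codes.
```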
95
Inference propojení komponent / Component Interconnection Inference. Olšarová, Nela, January 2012.
The Master's thesis deals with the design of a hardware component interconnection inference algorithm intended for use in the FPGA schema editor integrated into the educational integrated development environment VLAM IDE. The aim of the algorithm is to support the user by finding an optimal interconnection of two given components. The editor and the development environment are implemented as an Eclipse plugin using the GMF framework. A brief description of these technologies and of embedded systems design is followed by the design of the inference algorithm. This problem is a topic of combinatorial optimization, related to bipartite matching and the assignment problem. After this, the implementation of the algorithm is described, followed by tests and a summary of the achieved results.
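As an illustration of the underlying assignment problem (not the thesis's actual scoring or API), the sketch below matches the ports of two components by minimizing a hypothetical compatibility cost matrix with the Hungarian method, via SciPy's linear_sum_assignment. Port names and costs are invented for the example.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical port-compatibility costs: cost[i, j] is the penalty of wiring
# output port i of component A to input port j of component B (e.g. based on
# name similarity, bit-width mismatch, or signal direction).
cost = np.array([
    [0.1, 2.0, 3.0],   # A.clk  vs  B.clk, B.rst, B.data
    [2.0, 0.2, 3.0],   # A.rst
    [3.0, 3.0, 0.5],   # A.data
])

row_ind, col_ind = linear_sum_assignment(cost)   # Hungarian / Kuhn-Munkres
a_ports = ["clk", "rst", "data"]
b_ports = ["clk", "rst", "data"]
for i, j in zip(row_ind, col_ind):
    print(f"A.{a_ports[i]} -> B.{b_ports[j]} (cost {cost[i, j]})")
print("total cost:", cost[row_ind, col_ind].sum())
```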
96
[en] AN EXPERIMENTAL INVESTIGATION OF PROBABILITY DISTRIBUTION OF SOLUTION TIME IN GRASP AND ITS APPLICATION ON THE ANALYSIS OF PARALLEL IMPLEMENTATIONS / [pt] UMA INVESTIGAÇÃO EXPERIMENTAL DA DISTRIBUIÇÃO DE PROBABILIDADE DO TEMPO DE SOLUÇÃO EM HEURÍSTICAS GRASP E SUA APLICAÇÃO NA ANÁLISE DE IMPLEMENTAÇÕES PARALELAS. Aiex, Renata Machado, 13 June 2003.
[pt] GRASP (Greedy Randomized Adaptive Search Procedure) é uma metaheurística de partidas múltiplas usada para obter soluções para problemas de otimização combinatória. A metaheurística GRASP tem sido usada para obter soluções de qualidade para muitos problemas de otimização combinatória. Nesse trabalho é proposta uma metodologia para análise do comportamento da metaheurística GRASP. Também são propostas estratégias de hibridização com o religamento de caminhos. Essas estratégias foram desenvolvidas para o problema de atribuição de três índices (AP3) e para o problema de escalonamento de tarefas conhecido na literatura como job-shop scheduling problem (JSP) e são analisadas de acordo com a metodologia proposta. A metodologia para análise do comportamento do método GRASP pode ser usada para prever, a partir da versão seqüencial do algoritmo, como a qualidade da solução do algoritmo implementado em paralelo irá variar. Os algoritmos GRASP desenvolvidos para o AP3 e para o JSP foram paralelizados e os resultados são comparados aos resultados obtidos usando a metodologia proposta. / [en] GRASP (Greedy Randomized Adaptive Search Procedure) is a multi-start metaheuristic for combinatorial optimization problems. GRASP has been used to find quality solutions to several combinatorial optimization problems. In this work we describe a methodology for the analysis of GRASP. Hybrid strategies of GRASP with path relinking are also proposed. These strategies are studied for the 3-index assignment problem (AP3) and for the job-shop scheduling problem (JSP) and are analyzed according to the proposed methodology. The methodology for the analysis of GRASP is used to predict qualitatively how the quality of the solution varies in a parallel independent GRASP, using the data of the sequential GRASP version as input. The GRASPs for the AP3 and for the JSP are parallelized and the computational results are compared to the results obtained using the proposed methodology.
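A minimal GRASP skeleton, greedy randomized construction followed by local search, repeated from multiple starts, is sketched below for a generic minimization problem. The problem interface (candidates, is_complete, cost, neighbors) is hypothetical, and the sketch omits the path-relinking hybridization studied in the thesis.

```python
import random

# Generic GRASP skeleton: multi-start greedy randomized construction + local search.
# Hypothetical problem interface:
#   candidates(partial) -> list of (element, incremental_cost) still addable
#   is_complete(partial) -> bool
#   cost(solution)       -> objective value to minimize
#   neighbors(solution)  -> iterable of neighboring solutions

def grasp(candidates, is_complete, cost, neighbors, iterations=100, alpha=0.3, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(iterations):
        # Construction phase: restricted candidate list (RCL) controlled by alpha.
        partial = []
        while not is_complete(partial):
            cand = candidates(partial)
            if not cand:
                break
            costs = [c for _, c in cand]
            threshold = min(costs) + alpha * (max(costs) - min(costs))
            rcl = [e for e, c in cand if c <= threshold]
            partial.append(rng.choice(rcl))
        solution = partial
        # Local search phase: first-improvement descent.
        improved = True
        while improved:
            improved = False
            for n in neighbors(solution):
                if cost(n) < cost(solution):
                    solution, improved = n, True
                    break
        if best is None or cost(solution) < cost(best):
            best = solution
    return best
```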
97
Analysis Design and Implementation of Artificial Intelligence Techniques in Edge Computing Environments. Hernández Vicente, Daniel, 27 March 2023.
Tesis por compendio / [ES] Edge Computing es un modelo de computación emergente basado en acercar el procesamiento a los dispositivos de captura de datos en las infraestructuras Internet of things (IoT). Edge computing mejora, entre otras cosas, los tiempos de respuesta, ahorra anchos de banda, incrementa la seguridad de los servicios y oculta las caídas transitorias de la red. Este paradigma actúa en contraposición a la ejecución de servicios en entornos cloud y es muy útil cuando se desea desarrollar soluciones de inteligencia artificial (AI) que aborden problemas en entornos de desastres naturales, como pueden ser inundaciones, incendios u otros eventos derivados del cambio climático. La cobertura de estos escenarios puede resultar especialmente difícil debido a la escasez de infraestructuras disponibles, lo que a menudo impide un análisis de los datos basado en la nube en tiempo real. Por lo tanto, es fundamental habilitar técnicas de IA que no dependan de sistemas de cómputo externos y que puedan ser embebidas en dispositivos de móviles como vehículos aéreos no tripulados (VANT), para que puedan captar y procesar información que permita inferir posibles situaciones de emergencia y determinar así el curso de acción más adecuado de manera autónoma.
Históricamente, se hacía frente a este tipo de problemas utilizando los VANT como dispositivos de recogida de datos con el fin de, posteriormente, enviar esta información a la nube donde se dispone de servidores capacitados para analizar esta ingente cantidad de información. Este nuevo enfoque pretende realizar todo el procesamiento y la obtención de resultados en el VANT o en un dispositivo local complementario. Esta aproximación permite eliminar la dependencia de un centro de cómputo remoto que añade complejidad a la infraestructura y que no es una opción en escenarios específicos, donde las conexiones inalámbricas no cumplen los requisitos de transferencia de datos o son entornos en los que la información tiene que obtenerse en ese preciso momento, por requisitos de seguridad o inmediatez.
Esta tesis doctoral está compuesta de tres propuestas principales. En primer lugar se plantea un sistema de despegue de enjambres de VANTs basado en el algoritmo de Kuhn Munkres que resuelve el problema de asignación en tiempo polinómico. Nuestra evaluación estudia la complejidad de despegue de grandes enjambres y analiza el coste computacional y de calidad de nuestra propuesta. La segunda propuesta es la definición de una secuencia de procesamiento de imágenes de catástrofes naturales tomadas desde drones basada en Deep learning (DL). El objetivo es reducir el número de imágenes que deben procesar los servicios de emergencias en la catástrofe natural para poder tomar acciones sobre el terreno de una manera más rápida. Por último, se utiliza un conjunto de datos de imágenes obtenidas con VANTs y relativas a diferentes inundaciones, en concreto, de la DANA de 2019, cedidas por el Ayuntamiento de San Javier, ejecutando un modelo DL de segmentación semántica que determina automáticamente las regiones más afectadas por las lluvias (zonas inundadas).
Entre los resultados obtenidos se destacan los siguientes: 1- la mejora drástica del rendimiento del despegue vertical coordinado de una red de VANTs. 2- La propuesta de un modelo no supervisado para la vigilancia de zonas desconocidas representa un avance para la exploración autónoma mediante VANTs. Esto permite una visión global de una zona concreta sin realizar un estudio detallado de la misma. 3- Por último, un modelo de segmentación semántica de las zonas inundadas, desplegado para el procesamiento de imágenes en el VANTs, permite la obtención de datos de inundaciones en tiempo real (respetando la privacidad) para una reconstrucción virtual fidedigna del evento.
Esta tesis ofrece una propuesta para mejorar el despegue coordinado de drones y dotar de capacidad de procesamiento de algoritmos de deep learning a dispositivos edge, más concretamente UAVs autónomos. / [CA] Edge Computing és un model de computació emergent basat a acostar el processament als dispositius de captura de dades en les infraestructures Internet of things (IoT). Edge computing millora, entre altres coses, els temps de resposta, estalvia amplades de banda, incrementa la seguretat dels serveis i oculta les caigudes transitòries de la xarxa. Aquest paradigma actua en contraposició a l'execució de serveis en entorns cloud i és molt útil quan es desitja desenvolupar solucions d'intel·ligència artificial (AI) que aborden problemes en entorns de desastres naturals, com poden ser inundacions, incendis o altres esdeveniments derivats del canvi climàtic. La cobertura d'aquests escenaris pot resultar especialment difícil a causa de l'escassetat d'infraestructures disponibles, la qual cosa sovint impedeix una anàlisi de les dades basat en el núvol en temps real. Per tant, és fonamental habilitar tècniques de IA que no depenguen de sistemes de còmput externs i que puguen ser embegudes en dispositius de mòbils com a vehicles aeris no tripulats (VANT), perquè puguen captar i processar informació per a inferir possibles situacions d'emergència i determinar així el curs d'acció més adequat de manera autònoma.
Històricament, es feia front a aquesta mena de problemes utilitzant els VANT com a dispositius de recollida de dades amb la finalitat de, posteriorment, enviar aquesta informació al núvol on es disposa de servidors capacitats per a analitzar aquesta ingent quantitat d'informació. Aquest nou enfocament pretén realitzar tot el processament i l'obtenció de resultats en el VANT o en un dispositiu local complementari. Aquesta aproximació permet eliminar la dependència d'un centre de còmput remot que afig complexitat a la infraestructura i que no és una opció en escenaris específics, on les connexions sense fils no compleixen els requisits de transferència de dades o són entorns en els quals la informació ha d'obtindre's en aqueix precís moment, per requisits de seguretat o immediatesa.
Aquesta tesi doctoral està composta de tres propostes principals. En primer lloc es planteja un sistema d'enlairament d'eixams de VANTs basat en l'algorisme de Kuhn Munkres que resol el problema d'assignació en temps polinòmic. La nostra avaluació estudia la complexitat d'enlairament de grans eixams i analitza el cost computacional i de qualitat de la nostra proposta. La segona proposta és la definició d'una seqüència de processament d'imatges de catàstrofes naturals preses des de drons basada en Deep learning (DL).L'objectiu és reduir el nombre d'imatges que han de processar els serveis d'emergències en la catàstrofe natural per a poder prendre accions sobre el terreny d'una manera més ràpida. Finalment, s'utilitza un conjunt de dades d'imatges obtingudes amb VANTs i relatives a diferents inundacions, en concret, de la DANA de 2019, cedides per l'Ajuntament de San Javier, executant un model DL de segmentació semàntica que determina automàticament les regions més afectades per les pluges (zones inundades).
Entre els resultats obtinguts es destaquen els següents: 1- la millora dràstica del rendiment de l'enlairament vertical coordinat d'una xarxa de VANTs. 2- La proposta d'un model no supervisat per a la vigilància de zones desconegudes representa un avanç per a l'exploració autònoma mitjançant VANTs. Això permet una visió global d'una zona concreta sense realitzar un estudi detallat d'aquesta. 3- Finalment, un model de segmentació semàntica de les zones inundades, desplegat per al processament d'imatges en el VANTs, permet l'obtenció de dades d'inundacions en temps real (respectant la privacitat) per a una reconstrucció virtual fidedigna de l'esdeveniment. / [EN] Edge Computing is an emerging computing model based on bringing data processing and storage closer to the location needed to improve response times and save bandwidth. This new paradigm acts as opposed to running services in cloud environments and is very useful in developing artificial intelligence (AI) solutions that address problems in natural disaster environments, such as floods, fires, or other events of an adverse nature. Coverage of these scenarios can be particularly challenging due to the lack of available infrastructure, which often precludes real-time cloud-based data analysis. Therefore, it is critical to enable AI techniques that do not rely on external computing systems and can be embedded in mobile devices such as unmanned aerial vehicles (UAVs) so that they can capture and process information to understand their context and determine the appropriate course of action independently.
Historically, this problem was addressed by using UAVs as data collection devices to send this information to the cloud, where servers can process it. This new approach aims to do all the processing and get the results on the UAV or a complementary local device. This approach eliminates the dependency on a remote computing center that adds complexity to the infrastructure and is not an option in specific scenarios where wireless connections do not meet the data transfer requirements. It is also an option in environments where the information has to be obtained at that precise moment due to security or immediacy requirements.
This study consists of three main proposals. First, we propose a UAV swarm takeoff system based on the Kuhn Munkres algorithm that solves the assignment problem in polynomial time. Our evaluation studies the takeoff complexity of large swarms and analyzes our proposal's computational and quality cost. The second proposal is the definition of a Deep learning (DL) based image processing sequence for natural disaster images taken from drones to reduce the number of images processed by the first responders in the natural disaster. Finally, a dataset of images obtained with UAVs and related to different floods is used to run a semantic segmentation DL model that automatically determines the regions most affected by the rains (flooded areas).
The results are: (1) a drastic improvement in the performance of the coordinated vertical takeoff of a network of UAVs; (2) an unsupervised model for the surveillance of unknown areas, a breakthrough for autonomous exploration by UAVs that allows a global view of a specific area without performing a detailed study; and (3) a semantic segmentation model of flooded areas, deployed for image processing on the UAV, which allows obtaining real-time flood data (respecting privacy) for a reliable virtual reconstruction of the event.
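A minimal sketch of the assignment step behind result (1): UAVs are matched to takeoff slots by the Kuhn-Munkres (Hungarian) algorithm, here via SciPy's linear_sum_assignment on a distance matrix built from made-up coordinates. It illustrates the polynomial-time assignment only, not the full takeoff scheme of the thesis.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical ground positions of UAVs and the takeoff slots they must reach.
uavs = np.array([[0.0, 0.0], [1.0, 4.0], [5.0, 2.0], [6.0, 6.0]])
slots = np.array([[0.5, 1.0], [2.0, 5.0], [5.5, 2.5], [7.0, 5.0]])

# Cost matrix: Euclidean distance from each UAV to each slot.
cost = np.linalg.norm(uavs[:, None, :] - slots[None, :, :], axis=2)

# Kuhn-Munkres (Hungarian) assignment minimizing the total travel distance.
uav_idx, slot_idx = linear_sum_assignment(cost)
for u, s in zip(uav_idx, slot_idx):
    print(f"UAV {u} -> slot {s} (distance {cost[u, s]:.2f})")
print("total distance:", cost[uav_idx, slot_idx].sum())
```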
This thesis offers a proposal to improve the coordinated takeoff of drones and to provide edge devices, more specifically autonomous UAVs, with the processing capacity for deep learning algorithms, in order to develop services for the surveillance of areas affected by natural disasters, such as fire detection, segmentation of flooded areas, or detection of people in danger. Thanks to this research, services can be developed that enable the coordination of large arrays of drones and allow image processing without needing additional devices. This flexibility makes our approach a bet on the future and thus provides a development path for anyone interested in deploying an autonomous drone-based surveillance and actuation system. / I would like to acknowledge the project Development of High-Performance IoT Infrastructures against Climate Change based on Artificial Intelligence (GLOBALoT), funded by Ministerio de Ciencia e Innovación (RTC2019-007159-5), of which this thesis is part. / Hernández Vicente, D. (2023). Analysis Design and Implementation of Artificial Intelligence Techniques in Edge Computing Environments [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/192605 / Compendio