221.
Enfoque multiobjetivo bottom-up para la planificación dinámica de la distribución espacial en plantas industriales. Pérez Gosende, Pablo Alberto, 06 September 2022.
[EN] Facility layout planning (FLP) is one of the most critical decisions in operations management and one of the most discussed problems in the scientific literature framed in the broad field of industrial engineering. However, the widespread use of the traditional top-down solution approach, which starts with a block layout design phase and continues with the detailed layout within each work cell making up the production system, rests on assumptions that are poorly compatible with industrial operational reality, which limits its adoption in practice. This issue, together with the fact that the mathematical models used to generate layout alternatives mostly use the material handling cost as a single quantitative objective function, distorting the multi-objective nature of the problem, accentuates a gap that creates opportunities to improve FLP decision making in industrial practice. In this context, this doctoral thesis, supported by a thorough study of the state of the art and the analysis of benchmark mathematical optimisation models, presents a conceptual framework for FLP decision making from a multi-objective perspective, as well as a new multi-objective mixed-integer non-linear optimisation model (MOMINLP) to facilitate layout decision making for metal-mechanical industrial plants in dynamic demand environments through a bottom-up planning approach, taking into account quantitative and qualitative criteria. The proposed model, called bottom-up mDFLP, considers three objective functions that aim to: (1) minimise the total material handling cost and the total rearrangement cost, (2) maximise the subjective closeness rating between departments, and (3) maximise the area utilisation ratio. The bottom-up mDFLP model has been validated in a company from the metal-mechanical sector, confirming better objective-function values than those obtained with the current plant layout. / Pérez Gosende, PA. (2022). Enfoque multiobjetivo bottom-up para la planificación dinámica de la distribución espacial en plantas industriales [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/185800
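Read as a sketch with assumed symbols (the thesis's own notation is not reproduced here), the three objectives of a bottom-up mDFLP-style model have roughly this shape:

```latex
\min f_1 = \sum_{t}\sum_{i \neq j} c_{ij}\, f_{ijt}\, d_{ijt} + \sum_{t}\sum_{i} r_i\, z_{it},
\qquad
\max f_2 = \sum_{t}\sum_{i \neq j} R_{ij}\, \sigma_{ijt},
\qquad
\max f_3 = \frac{\sum_i a_i}{A}
```

where c_{ij}, f_{ijt} and d_{ijt} would denote the unit handling cost, material flow and distance between departments i and j in period t; r_i and z_{it} a rearrangement cost and a relocation indicator; R_{ij} the subjective closeness rating with sigma_{ijt} an adjacency indicator; and a_i the department areas over the occupied area A.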
222.
The Distributed and Assembly Scheduling Problem. Hatami, Sara, 16 May 2016.
Tesis por compendio / [EN] Nowadays, manufacturing systems face new global challenges, and a collaborative manufacturing environment is essential to meet them. Distributed manufacturing and assembly systems are two manufacturing systems that allow industries to deal with some of these challenges. This thesis studies a production problem in which both distributed manufacturing and assembly systems are considered. Although distributed manufacturing systems and assembly systems are well-known problems that have been extensively studied in the literature, to the best of our knowledge this thesis is the first effort to consider the two systems together. Given the importance of scheduling optimization for production performance, this thesis discusses several different ways to optimize the scheduling of the considered problem.
The studied scheduling setting consists of two stages: a production stage and an assembly stage. Several production centers make up the first stage. Each of these centers consists of several machines dedicated to manufacturing jobs. A single assembly machine is considered for the second stage. The produced jobs are assembled on the assembly machine to form final products according to a defined assembly program.
In this thesis, two different problems are considered, corresponding to two different production configurations for the production centers of the first stage. The first configuration is a flowshop, which results in what we refer to as the Distributed Assembly Permutation Flowshop Scheduling Problem (DAPFSP). In the second problem, referred to as the Distributed Parallel Machine and Assembly Scheduling Problem (DPMASP), unrelated parallel machines make up the production centers. The objective function for all considered problems is the minimization of the makespan of the final products on the assembly machine in the assembly stage.
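As a sketch of how such a solution is evaluated (assumed data layout, not the thesis's code), the makespan of a DAPFSP solution can be computed by combining per-factory permutation-flowshop completion times with a sequential pass over the assembly machine:

```python
# Illustrative sketch only: evaluate the makespan of a DAPFSP solution.
# Assumed inputs: one job permutation per factory, proc[job][machine]
# processing times, a product sequence for the single assembly machine,
# the jobs composing each product, and per-product assembly times.
def flowshop_completion_times(perm, proc):
    m = len(proc[perm[0]])
    prev = [0.0] * m                 # completion times of the previous job
    done = {}
    for job in perm:
        row = [0.0] * m
        for k in range(m):
            ready = row[k - 1] if k else 0.0
            row[k] = max(ready, prev[k]) + proc[job][k]
        prev, done[job] = row, row[-1]
    return done                      # completion time of each job

def dapfsp_makespan(factory_perms, proc, product_seq, jobs_of, assembly_time):
    completion = {}
    for perm in factory_perms:
        if perm:
            completion.update(flowshop_completion_times(perm, proc))
    t = 0.0
    for p in product_seq:            # a product starts once all its jobs are done
        ready = max(completion[j] for j in jobs_of[p])
        t = max(t, ready) + assembly_time[p]
    return t
```

Heuristics and metaheuristics for the problem essentially search over the factory permutations and the product sequence fed to an evaluator of this kind.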
In this thesis, some extensions of the studied problems are considered so as to bring them as close as possible to the reality of production shops. In the DAPFSP, sequence-dependent setup times are added for the machines in both the production and assembly stages. Similarly, in the DPMASP, due to technological constraints, certain jobs can be processed only in certain factories.
Mathematical models are presented as exact solution approaches for some of the studied problems, and two state-of-the-art solvers, CPLEX and GUROBI, are used to solve them. Since these solvers cannot solve large-sized instances, we design and develop heuristic methods for the problems. In addition to the heuristics, some metaheuristics are also designed and proposed to improve the solutions obtained by the heuristics. Finally, for each proposed problem, the performance of the proposed solution methods is compared through extensive computational experiments and a comprehensive ANOVA statistical analysis. / Hatami, S. (2016). The Distributed and Assembly Scheduling Problem [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/64072 / Compendio
223.
Investigación de nuevas metodologías para la planificación de sistemas de tiempo real multinúcleo mediante técnicas no convencionales. Aceituno Peinado, José María, 28 March 2024.
Tesis por compendio / [EN] Real-time systems are characterised by temporal constraints that must be met to guarantee the acceptable operation and feasibility of a system. In hard real-time systems especially, these temporal constraints must always be respected. These systems are typically applied in areas such as aviation, railway safety, satellites and process control, among others. Therefore, a missed deadline in a hard real-time system can lead to a catastrophic failure.
The scheduling of real-time systems is an area where various methodologies, heuristics and algorithms are studied and applied in an attempt to allocate the CPU resources without missing any deadline.
The use of multicore computing systems is an increasingly recurrent option in hard real-time systems. This is due, among other reasons, to their high computational performance, thanks to the ability to run multiple processes in parallel.
On the other hand, multicore systems present a new problem: the contention that occurs due to the sharing of hardware resources. The source of this contention is the interference that arises between tasks allocated to different cores when they try to access the same shared resource simultaneously, typically shared memory. This added interference can lead to missed deadlines, and therefore the schedule would not be feasible.
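For context, the kind of schedulability test involved can be sketched as classical fixed-priority response-time analysis extended with a constant per-task interference bound (an illustrative assumption; the thesis's temporal models and analyses are more refined):

```python
# Sketch: response-time analysis with an added interference term I[i]
# modelling contention on shared hardware resources (assumed constant).
import math

def response_times(C, T, I):
    """C, T, I: worst-case execution times, periods and interference
    bounds, indexed by decreasing priority; deadlines assumed equal to
    periods. Returns response times, or None for an unschedulable task."""
    R = []
    for i in range(len(C)):
        r = C[i] + I[i]
        while True:
            nxt = C[i] + I[i] + sum(math.ceil(r / T[j]) * C[j] for j in range(i))
            if nxt == r:
                break
            r = nxt
            if r > T[i]:             # deadline exceeded
                r = None
                break
        R.append(r)
    return R
```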
This thesis proposes new non-conventional scheduling methodologies and strategies to provide solutions to the interference problem in multicore systems. These methodologies and strategies include scheduling algorithms, task allocation algorithms, temporal models and schedulability analyses.
The results of this work have been published in several journal articles in the field. These articles present the new proposals that face the challenges of task scheduling. Most of them share a similar structure: the context is introduced, the existing problem is identified, a proposal to solve it or to improve the scheduling results is presented, the proposed methodology is then evaluated experimentally in practical terms, the results obtained are analysed, and finally conclusions about the proposal are drawn.
The results of the non-conventional methodologies proposed in the articles that comprise this thesis show an improvement in scheduling performance compared to classical algorithms in the area, in particular in terms of reduced interference and a higher schedulability rate. / This thesis was carried out within the framework of two national research projects. One of them is PRECON-I4, which pursues predictable and dependable computer systems for Industry 4.0. The other is PRESECREL, which pursues models and platforms for predictable, safe and dependable industrial computer systems. Both PRECON-I4 and PRESECREL are coordinated projects funded by the Ministerio de Ciencia, Innovación y Universidades and FEDER funds (AEI/FEDER, UE). The Universidad Politécnica de Valencia, the Universidad de Cantabria and the Universidad Politécnica de Madrid participate in both projects; IKERLAN S. COOP I.P. also participates in PRESECREL. In addition, part of the results of this thesis has also served to validate the allocation of temporal resources in critical systems within the framework of the METROPOLIS project (PLEC2021-007609). / Aceituno Peinado, JM. (2024). Investigación de nuevas metodologías para la planificación de sistemas de tiempo real multinúcleo mediante técnicas no convencionales [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/203212 / Compendio
224.
Techniques d'analyse et d'optimisation pour la synthèse architecturale de systèmes temps réel embarqués distribués : problèmes de placement, de partitionnement et d'ordonnancement / Analysis and optimization techniques for the architectural synthesis of real time embedded and distributed systems. Mehiaoui, Asma, 16 June 2014.
Modern development methodologies from industry and academia increasingly exploit the "model" concept to address the complexity of critical real-time systems. In particular, they define a stage in which the functional model, designed as a network of function blocks communicating through exchanged data signals, is deployed onto a hardware execution platform model and implemented in a software model consisting of a set of tasks and messages. This stage, the so-called deployment stage, establishes an operational architecture of the system and therefore requires the evaluation and validation of the system's temporal properties. In the context of event-driven real-time systems, the verification of temporal properties is performed using schedulability analysis based on response time analysis. Each deployment choice has an essential impact on the validity and the quality of the system. However, existing methodologies do not provide support to guide the application designer in the exploration of the space of operational architectures. The objective of this thesis is to develop techniques for the analysis and automatic synthesis of a valid operational architecture optimized with respect to system performance. Our proposal is dedicated to the exploration of the architecture space considering at the same time the four degrees of freedom determined during the deployment phase: (i) the placement of functional elements on the computing and communication resources of the execution platform, (ii) the partitioning of functional elements into real-time tasks and of data signals into messages, (iii) the assignment of priorities to system tasks and messages, and (iv) the assignment of the shared-data protection mechanism for periodic real-time systems. We are mainly interested in meeting the temporal constraints and the resource capacities of the target platform. In addition, we consider the optimization of end-to-end latencies and memory consumption. The design space exploration approaches presented in this thesis are based on the MILP (Mixed Integer Linear Programming) optimization technique and address both time-driven and data-driven applications. Unlike many earlier approaches providing a partial solution to the deployment problem, the proposed methods consider the whole deployment problem. The approaches proposed in this thesis are evaluated using both synthetic and industrial applications.
225.
Gestion optimisée d'un modèle d'agrégation de flexibilités diffuses / Optimized management of a distributed demand response aggregation model. Prelle, Thomas, 22 September 2014.
The desire to increase the share of renewable energies in the energy mix leads to an increase in the share of volatile and non-controllable energy, and makes the supply-demand balance difficult to meet. One way to integrate these energies into the current electrical grid anyway is to deploy new energy storage and demand response systems across the country to counterbalance under- or over-production. In order to integrate all these energy systems into the supply-demand balancing process, they are gathered together within a flexibility aggregation power plant, which is then seen as a virtual power plant. As for any other power plant, it is necessary to compute its production plan. Firstly, we propose in this PhD thesis an architecture and a management method for an aggregation power plant composed of any type of energy system. Then, we propose algorithms to compute the production plan of each type of energy system satisfying all of its operating constraints. Finally, we propose an approach to compute the production plan of the aggregation power plant so as to maximize its financial profit while complying with all the constraints of the grid.
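As a toy illustration of computing a profit-maximising production plan for one storage unit against hourly prices (invented data; not the thesis's algorithms):

```python
# Dispatch a single storage unit: choose charge c_t and discharge d_t per
# hour to maximise sum_t price_t * (d_t - c_t) subject to power limits and
# state-of-charge bounds. linprog minimises, hence the sign flip.
import numpy as np
from scipy.optimize import linprog

price = np.array([30.0, 25.0, 60.0, 80.0])     # EUR/MWh, assumed
T, p_max, e_max, e0 = len(price), 1.0, 2.0, 1.0

cost = np.concatenate([price, -price])          # variables: [c_t..., d_t...]
bounds = [(0, p_max)] * (2 * T)

L = np.tril(np.ones((T, T)))                    # cumulative-sum operator
A_ub = np.block([[L, -L], [-L, L]])             # 0 <= e0 + cum(c - d) <= e_max
b_ub = np.concatenate([np.full(T, e_max - e0), np.full(T, e0)])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
profit = -res.fun
```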
226.
Meta-heurísticas Iterated Local Search, GRASP e Artificial Bee Colony aplicadas ao Job Shop Flexível para minimização do atraso total / Meta-heuristics Iterated Local Search, GRASP and Artificial Bee Colony applied to the Flexible Job Shop minimizing total tardiness. Melo, Everton Luiz de, 07 February 2014.
The production environment addressed herein is the Flexible Job Shop (FJS), a generalization of the Job Shop (JS). In the JS environment, the job scheduling problem is classified by Garey, Johnson and Sethi (1976) as NP-Hard, and the FJS is at least as difficult as the JS. The FJS is composed of a set of jobs, each consisting of operations. Each operation must be processed individually, without interruption, on a single machine from a subset of enabled machines. The main performance criterion is minimizing job tardiness. Mixed Integer Linear Programming (MILP) models are presented; these models minimize the total tardiness and the completion time of the last operation, i.e., the makespan. New job priority rules are proposed, as well as adaptations of rules from the literature. These rules are used by constructive heuristics and are combined with strategies aimed at exploiting specific characteristics of the FJS. In order to improve the solutions initially obtained, local searches and other improvement mechanisms are proposed and used in the development of metaheuristics of three different categories. These metaheuristics are: Iterated Local Search (ILS), classified as a trajectory metaheuristic; Greedy Randomized Adaptive Search Procedure (GRASP), a constructive metaheuristic; and Artificial Bee Colony (ABC), a recently proposed population metaheuristic. These methods were selected owing to their good results for various optimization problems in the literature. Computational experiments using 600 FJS instances are carried out to allow comparisons between the resolution methods. The results show that exploiting the characteristics of the problem allows one of the proposed priority rules to outperform the best rule from the literature in about 81% of the instances. The metaheuristics ILS, GRASP and ABC achieve more than 31% improvement over the initial solutions and obtain an average tardiness only 2.24% higher than that of the optimal solutions. Modifications to the metaheuristics are also proposed to obtain even more significant improvements without increasing execution time. Additionally, a version with Disassembly and Assembly operations (DAFJS) is studied, and the experiments performed with a set of 150 instances also indicate good performance of the methods developed.
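For reference, the generic ILS skeleton underlying one of these approaches looks as follows (an illustrative sketch, not the thesis's implementation; `evaluate` would return the total tardiness of a schedule, while `local_search` and `perturb` are problem-specific plug-ins):

```python
# Generic Iterated Local Search: repeatedly perturb the incumbent,
# re-optimise locally, and accept improving (or equal-cost) schedules.
import random

def iterated_local_search(initial, evaluate, local_search, perturb,
                          iterations=1000, seed=0):
    rng = random.Random(seed)
    best = local_search(initial)
    best_cost = evaluate(best)
    current = best
    for _ in range(iterations):
        cand = local_search(perturb(current, rng))
        cand_cost = evaluate(cand)
        if cand_cost <= best_cost:               # acceptance criterion
            current, best, best_cost = cand, cand, cand_cost
    return best, best_cost
```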
227.
Proposta de um modelo de planejamento agregado da produção numa usina de açúcar e álcool vinculado à flutuação de preços em mercados à vista e no mercado futuro / A model of aggregate production planning in a sugar and alcohol mill linked to price fluctuations in spot and futures markets. Carvalho, Marcelo Dias, 09 November 2009.
The aim of this dissertation is to develop an aggregate production planning model that supports management- and board-level decisions in sugar and alcohol mills regarding the cane varieties harvested each week, purchases of sugarcane from third parties, the type of transport (own or outsourced) to use each week, the total cane crushed per week to meet demand, and the (industrial and commercial) processes to choose for producing and selling sugar and alcohol. Decisions must be made as a function of prices in the domestic, foreign and futures markets, the company's cash flow, the mill's capacity to store sugar and alcohol, and the possibility of using third-party storage. Decisions on cane purchasing, process choice and product sales are made weekly over a rolling 52-week planning horizon, which includes the harvest season in central-southern Brazil (mid-March to mid-December, approximately 36 weeks) plus the off-season (approximately 16 weeks, from mid-December to mid-March). The search for better marketing strategies to support such decision making is a constant need for entrepreneurs in the sector, who are often surprised by changes in sugar and alcohol prices in the domestic, foreign and futures markets. On the commercial side, this study uses the Delphi method to forecast the sugar and alcohol prices that guide decision making in the production planning and control of sugar and alcohol mills. Hedging is defined as a financial transaction that protects a given asset of a company against unexpected price changes. This work uses a model for choosing the product mix to hedge, linked to profitability and risk minimization, called the Semi-Variance Model with Markowitz scenario analysis. For decisions related to the agricultural, industrial and commercial parts, a mixed integer linear programming model is used and solved with the mathematical programming software LINGO and its interfaces with Excel spreadsheets. For decisions related to the optimal hedging mix in each week, a quadratic programming model is used, also solved by LINGO and its interfaces with Excel spreadsheets. A case study was conducted in a sugar and alcohol mill in the municipality of Junqueirópolis (SP) to validate the proposed model.
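As an illustration of the scenario-based semi-variance criterion behind the hedging mix (invented numbers; not the dissertation's data or model):

```python
# Scenario semi-variance of a product mix: only below-mean portfolio
# returns count towards risk, in the spirit of Markowitz semi-variance.
import numpy as np

def semivariance(weights, scenario_returns, probs):
    port = scenario_returns @ weights       # portfolio return per scenario
    mean = probs @ port
    downside = np.minimum(port - mean, 0.0) # keep only below-mean deviations
    return probs @ downside**2

w = np.array([0.5, 0.3, 0.2])               # mix over products (assumed)
R = np.array([[0.04, 0.02, 0.01],           # returns per scenario (assumed)
              [-0.03, 0.05, 0.00],
              [0.01, -0.02, 0.03]])
p = np.array([0.3, 0.4, 0.3])               # scenario probabilities
risk = semivariance(w, R, p)
```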
228.
Energy Conservation for Collaborative Applications in Wireless Sensor Networks / Conservation d'énergie pour les applications collaboratives dans les réseaux de capteurs sans fil. Demigha, Oualid, 29 November 2015.
Wireless Sensor Networks are an emerging technology enabled by recent advances in Micro-Electro-Mechanical Systems that have led to the design of tiny wireless sensor nodes characterized by small sensing, data-processing and communication capacities. To accomplish complex tasks such as target tracking, data collection and zone surveillance, these nodes need to collaborate with each other to overcome their limited battery capacity. Since battery hardware develops very slowly, the optimization effort should inevitably be focused on the software layers of the nodes' protocol stack and operating systems. In this thesis, we investigated the energy problem in the context of collaborative applications and proposed an approach based on node selection using predictions and data correlations to meet the application requirements in terms of energy efficiency and quality of data. First, we surveyed almost all the recent approaches proposed in the literature that treat the problem of energy efficiency of prediction-based target tracking schemes, in order to extract the relevant recommendations.
Next, we proposed a dynamic clustering protocol based on an enhanced version of the Distributed Kalman Filter, used as a prediction algorithm, to design an energy-efficient target tracking scheme. Our proposed scheme uses these predictions to anticipate the actions of the nodes and their roles in order to minimize the number of nodes involved in the tasks. Based on our findings from the simulation data, we generalized our approach to any data collection scheme that uses a geographic clustering algorithm. We formulated the problem of energy minimization under data precision constraints as a binary integer linear program to find its exact solution in the general context. We validated the model and proved some of its fundamental properties. Finally, given the complexity of the problem, we proposed and evaluated a heuristic solution consisting of a correlation-based adaptive clustering algorithm for data collection. We showed that, by relaxing some constraints of the problem, our heuristic solution achieves an acceptable level of energy efficiency while preserving the quality of data.
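As a minimal illustration of prediction-driven node selection (an assumed constant-velocity Kalman prediction step; the thesis relies on an enhanced Distributed Kalman Filter):

```python
# Predict the target's next state, then wake only nodes near the
# predicted position; all matrices and the radius are assumptions.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],        # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
Q = 0.01 * np.eye(4)                # process-noise covariance (assumed)

def predict(x, P):
    """One Kalman prediction step for state x and covariance P."""
    return F @ x, F @ P @ F.T + Q

def nodes_to_wake(x_pred, positions, radius=10.0):
    """Indices of nodes within `radius` of the predicted position."""
    d = np.linalg.norm(positions - x_pred[:2], axis=1)
    return np.flatnonzero(d <= radius)
```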
229.
Batch Processor Scheduling - A Class Of Problems In Steel Casting Foundries. Ramasubramaniam, M, 06 1900.
Modern manufacturing systems need new types of scheduling methods. While traditional scheduling methods are primarily concerned with the sequencing of jobs, modern manufacturing environments provide the additional possibility of processing jobs in batches. This adds to the complexity of scheduling. There are two types of batching: (i) serial batching (jobs may be batched if they share the same setup on a machine and one job is processed at a time; the machine that processes jobs in this manner is called a discrete processor) and (ii) parallel batching (several jobs can be processed simultaneously on a machine; the machine that processes jobs in this manner is called a batch processor or batch processing machine).
Parallel batching environments have attracted wide attention from researchers working in the field of scheduling. In particular, taking inspiration from studies of scheduling batch processors in semiconductor manufacturing [Mathirajan and Sivakumar (2006b) and Venkataramana (2006)] and in steel casting industries [Krishnaswamy et al. (1998), Shekar (1998) and Mathirajan (2002)] carried out in the Department of Management Studies of the Indian Institute of Science, this thesis addresses a special batch processor scheduling problem observed in steel casting manufacturing.
A fundamental feature of the steel casting industry is its extreme flexibility, enabling castings to be produced with almost unlimited freedom in design over an extremely wide range of sizes, quantities and materials suited to practically every environment and application. Furthermore, the steel casting industry is capital intensive and highly competitive.
From the viewpoint of throughput and utilization of the important and costly resources in the foundry manufacturing, it was felt that the process-controlled furnace operations for the melting and pouring operations as well as the heat-treatment furnace operations are critical for meeting the overall production schedules. The two furnace operations are batch processes that have distinctive constraints on job-mixes in addition to the usual capacity and technical constraints associated with any industrial processes. The benefits of effective scheduling of these batch processes include higher machine utilization, lower work-in-process (WIP) inventory, shorter cycle time and greater customer satisfaction [Pinedo (1995)].
Very few studies address the production planning and scheduling models for a steel foundry, considering the melting furnace of the pre-casting stage as the core foundry operation [Voorhis et al. (2001), Krishnaswamy et al. (1998) and Shekar (1998)]. Even though the melting and pouring operations may be considered as the core of foundry operations and their scheduling is of central importance, the scheduling of heat-treatment furnaces is also of considerable importance. This is because the processing time required at the heat treatment furnace is often longer compared to other operations in the steel-casting foundry and therefore considerably affects the scheduling, overall flow time and WIP inventory.
Further, the heat-treatment operation is critical because it determines the final properties that enable components to perform under demanding service conditions such as large mechanical load, high temperature and anti-corrosive processing. It is also important to note that the heat-treatment operation is the only predominantly long process in the entire steel casting manufacturing process, taking up a large part of the total processing time (up to a few days, as against other processes that typically take only a few hours). Because of this, the heat-treatment operation is a major bottleneck in the entire steel casting process.
The jobs in the WIP inventory in front of the heat-treatment furnace vary widely in size (from a few grams to a ton) and dimensions (from 10 mm to 2000 mm). Furthermore, castings are primarily classified into a number of job families based on alloy type, such as low alloy and high alloy castings. These job families are incompatible because the temperature requirements of low alloy and high alloy castings differ for the same type of heat-treatment operation. The job families are further classified into various sub-families based on the type of heat-treatment operations they undergo. These sub-families are also incompatible, as each requires a different combination of heat-treatment operations. The widely varying job sizes, job dimensions and multiple incompatible job families introduce a high degree of complexity into scheduling the heat-treatment furnace.
Scheduling the heat-treatment furnace with multiple incompatible job families can have a profound effect on the overall production rate, as the processing time of the heat-treatment operation is much longer than that of other operations. Considering the complexity of the process and the time consumed by the heat-treatment operation, efficient scheduling of this operation is imperative in order to maximize throughput and enhance the productivity of the entire steel casting manufacturing process, which is important to the firm. Management's concern with increasing the throughput of the bottleneck machine, and thereby productivity, motivated us to adopt makespan as the scheduling objective.
In a recent observation of heat-treatment operations in a couple of steel casting industries, and from the research studies reported in the literature, we noticed that the real-life problem of dynamic scheduling of a heat-treatment furnace with multiple incompatible job families, non-identical job sizes, non-identical job dimensions, and non-agreeable release times and due dates, so as to maximize throughput and utilization and minimize work-in-process inventory, is not addressed at all. There are, however, a very few studies [Mathirajan et al. (2001, 2002, 2004a, 2007)] that have addressed the problem of scheduling a heat-treatment furnace with incompatible job families and non-identical job sizes to maximize the utilization of the furnace. Due to the difference between the real-life situation of dynamically scheduling the heat-treatment furnace in steel casting manufacturing and the research reported on the same problem, we identified three new classes of batch processor problems, applicable to real-life situations depending on the type of heat-treatment operation(s) being carried out and the type of steel casting industry (small, medium and large scale), and this thesis addresses these new classes of research problems on batch processor scheduling.
The first part of the thesis addresses our new research problem (called Research Problem 1) of minimizing makespan (Cmax) on a batch processor (BP) with a single job family (SJF), non-identical job sizes (NIJS), and non-identical job dimensions (NIJD). This problem is of interest to small scale steel casting industries performing only one type of heat-treatment operation, such as surface hardening. Generally, only a few steel casting industries offer this type of special heat-treatment operation, and the customer is thus willing to accept delays in the completion of his orders. Hence, due-date issues are not important for these types of industries.
We formulate the problem as a Mixed Integer Linear Programming (MILP) model and validate the proposed MILP model through a numerical example. In order to understand the computational intractability issue, we carry out a small computational experiment. The results of this experiment indicate that the computational time required for solving the MILP model, as a function of problem size, is non-deterministic and non-polynomial.
Due to the computational intractability of the proposed MILP model, we propose five variants of a greedy heuristic algorithm and a genetic algorithm for addressing Research Problem 1. We carry out computational experiments to assess the performance of the heuristic algorithms from two perspectives: (i) comparison with optimal solutions on small scale instances and (ii) comparison with a lower bound on large scale instances. We choose five important problem parameters for the computational experiment and propose a suitable experimental design to generate pseudo problem instances.
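For intuition, a greedy heuristic of this general family can be sketched as follows (assumed data model, sizes only, every job assumed to fit the machine; these are not the thesis's SWB variants):

```python
# Greedy first-fit batching on one batch processor: a batch runs for as
# long as its longest job, so makespan = sum of per-batch maxima.
def greedy_batches(jobs, capacity):
    """jobs: list of (size, proc_time). Returns (batches, makespan)."""
    # Longest-processing-time first keeps long jobs together, which
    # tends to reduce the sum of per-batch maxima.
    jobs = sorted(jobs, key=lambda j: j[1], reverse=True)
    batches = []                     # each: [used_size, batch_time, members]
    for size, t in jobs:
        for b in batches:            # first batch with enough residual room
            if b[0] + size <= capacity:
                b[0] += size
                b[1] = max(b[1], t)
                b[2].append((size, t))
                break
        else:
            batches.append([size, t, [(size, t)]])
    return batches, sum(b[1] for b in batches)
```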
As no lower bound (LB) procedure exists for Research Problem 1, we develop in this thesis an LB procedure that provides an LB on makespan by considering the NIJS and NIJD characteristics together. Before using the proposed LB procedure to evaluate the heuristic algorithms, we conduct a computational experiment comparing the quality of the LB on makespan with the optimal makespan on a number of small-scale instances. The results indicate that the proposed LB procedure is efficient and can be used to obtain an LB on makespan for any large-scale problem.
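For intuition, the sketch below shows a simple capacity-based bound that combines the NIJS and NIJD characteristics; the procedure developed in the thesis is presumably tighter than this. With per-batch processing time p, no feasible schedule can use fewer batches than either aggregate capacity limit allows.

```python
# A hedged sketch of a simple capacity-based lower bound (our assumption,
# not the thesis procedure). Data hypothetical.
import math

def makespan_lb(jobs, S, V, p):
    total_size = sum(s for s, _ in jobs)
    total_volume = sum(v for _, v in jobs)
    min_batches = max(math.ceil(total_size / S), math.ceil(total_volume / V), 1)
    return p * min_batches

jobs = [(4, 2), (7, 5), (3, 1), (5, 3), (6, 4)]
print("LB on makespan:", makespan_lb(jobs, S=10, V=8, p=12.0))
```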
In the first perspective of the evaluation, the proposed heuristic algorithms are run on small-scale problem instances and the makespan values are recorded. We solve the MILP model to obtain optimal solutions for these instances. To compare the proposed heuristic algorithms we use two performance measures: (a) the number of times a heuristic solution equals the optimal solution and (b) the average percentage loss with respect to the optimal solution.
In the second perspective of the evaluation, the proposed heuristic algorithms are run on large-scale problem instances and the makespan values are recorded. The LB procedure is also run on these instances to obtain the LB on makespan. To compare the heuristic algorithms against the LB on makespan, we use four performance measures: (a) the number of times a heuristic solution equals the LB on makespan, (b) the average percentage loss with respect to the LB on makespan, (c) the average relative percentage deviation and (d) the maximum relative percentage deviation.
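As a sketch of these comparison measures, assume heur[i] and ref[i] are the heuristic makespan and the reference value (optimal or LB) for instance i; the thesis may define the averaged measures over slightly different instance subsets.

```python
# A hedged sketch of the comparison measures used in both perspectives.
def compare(heur, ref):
    dev = [(h - r) / r * 100.0 for h, r in zip(heur, ref)]
    return {
        "equals_reference": sum(h == r for h, r in zip(heur, ref)),
        "avg_loss_pct": sum(dev) / len(dev),
        "max_rel_dev_pct": max(dev),
    }

print(compare(heur=[120.0, 96.0, 144.0], ref=[120.0, 84.0, 132.0]))
```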
We extend Research Problem 1 by including additional job characteristics: the time at which a job arrives at the WIP inventory area of the heat-treatment furnace, its due date, and an additional constraint of non-agreeable release times and due dates (NARD). The resulting problem (called Research Problem 2) is relevant to small-scale steel casting foundries performing a single, traditional type of heat-treatment operation such as annealing; since many steel casting industries offer such operations, due-date compliance is important here. The mathematical model, LB procedure, greedy heuristic algorithm and genetic algorithm proposed for Research Problem 1, including the computational experiments, are appropriately modified and/or extended to address Research Problem 2.
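For illustration, a hedged sketch of what the NARD condition means: release times and due dates are non-agreeable when, for at least one pair of jobs, the release-time order and the due-date order disagree (r_i < r_j but d_i > d_j). All data are hypothetical.

```python
# A hedged sketch of a NARD check over all job pairs.
from itertools import combinations

def is_non_agreeable(release, due):
    jobs = list(zip(release, due))
    return any((r1 < r2 and d1 > d2) or (r2 < r1 and d2 > d1)
               for (r1, d1), (r2, d2) in combinations(jobs, 2))

print(is_non_agreeable(release=[0, 2, 5], due=[9, 14, 12]))   # True: jobs 2 and 3
```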
Finally, we extend Research Problem 2 by including an additional real-life dimension: multiple incompatible job families (MIJF). This new research problem (called Research Problem 3) is most relevant to medium- and large-scale steel casting foundries performing more than one type of heat-treatment operation, such as homogenizing and tempering or normalizing and tempering. The solution methodologies, the LB procedure and the computational experiments proposed for Research Problem 2 are further modified and enriched to address Research Problem 3.
From the detailed computational experiments conducted for each of the research problems defined in this study, we observe that: (a) the problem parameters considered influence the performance of the heuristic algorithms, (b) the proposed LB procedure is efficient, (c) the proposed genetic algorithm outperforms the other proposed heuristic algorithms, although its computational time grows as the problem size increases, and (d) if the decision maker prefers the computationally most efficient of the proposed algorithms, the greedy heuristic variants SWB, SWB(NARD) and SWB(NARD&MIJF) are relatively the best choices for Research Problems 1, 2 and 3 respectively.
|
230 |
Homing-Architekturen für Multi-Layer Netze: Netzkosten-Optimierung und Leistungsbewertung / Homing Architectures in Multi-Layer Networks: Cost Optimization and Performance AnalysisPalkopoulou, Eleni 21 December 2012 (has links) (PDF)
Cross-layer control of multi-layer networks enables the realization of advanced network architectures as well as novel concepts for increasing resilience. The subject of this work is a new, resource-efficient concept for compensating core-router failures in IP networks. A core-router failure disconnects the access routers attached to it from the network. Access routers are therefore usually connected to two or more different core routers (dual homing), which however doubles the access capacity required in the IP network. In the new scheme - dual homing with shared backup router resources (DH-SBRR) - each access router is connected on the one hand to a core router of the IP network and on the other hand to a network element of the underlying transport layer. Backup router resources, which can be held at arbitrary locations in the IP network, can thereby be switched in via the transport network to take the place of a failed core router. This protection switching is controlled by a cross-layer control plane spanning both the transport network and the IP network - for example based on GMPLS. Since current state (e.g. routing tables) must also be transferred to the backup router resources during switchover, the new scheme also includes concepts for router virtualization.
To compare and assess the performance of the new DH-SBRR scheme, this work contrasts different access-router homing variants with respect to network cost, network availability, recovery time and network energy consumption. Two multi-layer network scenarios are considered: IP over WDM and IP over OTN (ODU).
To determine the minimum network cost, a generic multi-layer network optimization model has been developed that can be applied to different homing architectures. In addition to the optimization model for minimizing network cost, a model variant for minimizing energy consumption is presented. Heuristic solution methods are needed to reduce the computation time for solving the optimization problems and thus to make larger network scenarios tractable. Within this work, a new solution heuristic tailored specifically to the multi-layer optimization problems has therefore been developed.
The network cost optimization shows that deploying DH-SBRR can realize significant cost savings compared with conventional homing architectures. Changes in traffic load, in the cost of IP network elements or in the network topology have no significant influence on this result.
Beyond the cost and energy saving potential, the effects on network availability and recovery time have also been investigated. Lower bounds can be stated for the end-to-end availability under the different homing architectures. To determine the recovery time when DH-SBRR is used, a dedicated analytical model has been developed and evaluated. The DH-SBRR scheme can thus be parameterized to meet prescribed recovery times (as required, for example, for certain services). / The emergence of multi-layer networking capabilities opens the path for the development of advanced network architectures and resilience concepts. In this dissertation we propose a novel resource-efficient homing scheme: dual homing with shared backup router resources. The proposed scheme realizes shared router-level redundancy, enabled by the emergence of control plane architectures such as generalized multi-protocol label switching. Additionally, virtualization schemes complement the proposed architecture. Different homing architectures are examined and compared under the prism of cost, availability, recovery time and energy efficiency. Multiple network layers are considered in Internet protocol over wavelength division multiplexing as well as Internet protocol over optical data unit settings - leading to the development of multi-layer optimization techniques.
A generic multi-layer network design mathematical model, which can be applied to different homing architectures, is developed. The optimization objective can be adapted to minimize either the cost of network equipment or the power consumption of the network. To address potential issues of computational complexity, we develop a novel heuristic approach specifically targeting the proposed architecture. It is shown that significant cost savings can be achieved - even under extreme changes in the traffic demand volume, in the costs of different types of network equipment and in the network topology characteristics.
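As a toy illustration of this kind of design model (not the dissertation's model, whose formulation the abstract does not give), the sketch below minimizes homing cost when each access router must be dual-homed to two distinct core routers, paying a fixed cost per activated core router and a port cost per homing link. All names and cost figures are illustrative assumptions.

```python
# A toy sketch of homing-architecture cost minimization; data hypothetical.
import pulp

access, core = ["a1", "a2", "a3"], ["c1", "c2", "c3"]
core_cost, link_cost = 100, 10

prob = pulp.LpProblem("dual_homing_cost", pulp.LpMinimize)
h = pulp.LpVariable.dicts("h", (access, core), cat="Binary")  # a homed to c
y = pulp.LpVariable.dicts("y", core, cat="Binary")            # c is activated

prob += (core_cost * pulp.lpSum(y[c] for c in core)
         + link_cost * pulp.lpSum(h[a][c] for a in access for c in core))
for a in access:
    prob += pulp.lpSum(h[a][c] for c in core) == 2            # dual homing
    for c in core:
        prob += h[a][c] <= y[c]                               # only to active cores

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("activated cores:", [c for c in core if y[c].value() > 0.5])
```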
To evaluate the resulting performance tradeoffs, we study the effects on availability and recovery time. We derive lower bounds on end-to-end availability for the different homing architectures. Additionally, an analytical recovery-time model is developed and evaluated. We investigate how service-imposed maximum outage requirements directly affect the parameterization of the proposed architecture.
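For intuition only, a hedged sketch of an additive recovery-time decomposition (our assumption; the dissertation derives a more detailed analytical model): failure detection, control-plane signaling, routing-state transfer to the backup router over the transport layer, and switchover. All figures are hypothetical.

```python
# A hedged sketch of an additive recovery-time decomposition (assumption).
def recovery_time(detect_s, signal_s, state_mbytes, transfer_mbps, switch_s):
    return detect_s + signal_s + state_mbytes * 8.0 / transfer_mbps + switch_s

# 50 ms detection, 200 ms signaling, 64 MB of state over 1 Gbit/s, 100 ms
# switchover, checked against a hypothetical 1 s service outage requirement.
t = recovery_time(0.05, 0.2, 64, 1000, 0.1)
print(round(t, 3), "s; meets 1 s requirement:", t <= 1.0)
```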
|