151

Towards Flexible Power Generation Short-term Optimization of a Combined Cycle Power Plant Integrated with an Inlet Air Conditioning Unit

Mantilla Gutierrez, Weimar January 2019 (has links)
Combined cycle gas turbine (CCGT) power plants, as part of the electricity generation fleet, are required to improve their flexibility to help balance the power system under new scenarios with high shares of variable renewable sources. Among the different options to enhance power plant performance, an inlet air conditioning unit offers the benefits of power augmentation and "minimum environmental load" reduction by controlling the gas turbine intake temperature using cold thermal energy storage and a heat pump. This thesis evaluates the impact of the conditioning unit on a power-oriented CCGT under a day-ahead optimized operation strategy. To establish the hourly dispatch of the power plant and the appropriate operation mode of the inlet conditioning unit, a mixed-integer linear optimization problem was formulated to maximize the operational profit of the plant over a 24-hour horizon. To assess the impact of the proposed unit under this control strategy, annual simulations of a reference power plant were carried out with and without the unit, allowing a comparison of their performance by means of technical and economic indicators. Furthermore, a case study varying equipment sizes was performed in order to identify trends in power plant performance related to these parameters; lastly, a sensitivity analysis on market conditions was included to test the response of the control strategy. The results indicate that the inlet conditioning unit, together with the dispatch optimization, increases the power plant's operational profit through gains from power variation over peak and off-peak periods. For the specific case study in northern Italy, a power plant integrated with the conditioning unit is shown to be more profitable in terms of net present value, based on the assumed investment figures.
Regarding technical performance, the results also show that the unit reduces the minimum environmental load by 1.34% when part-load operation is required, and that it can increase the net power output by 0.17% annually. All in all, this study presents the benefits of a dispatch optimization strategy when coupled to a novel solution for increasing CCGT flexibility.
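The day-ahead dispatch problem described above is a mixed-integer optimization; the thesis's actual formulation is not reproduced in the abstract, but the core trade-off it optimizes (run at minimum environmental load, run at full load, or stay off, weighed against a start-up cost, over a short price horizon) can be sketched with a tiny brute-force model. The three-level simplification and all numbers below are hypothetical illustrations, not the thesis's data.

```python
from itertools import product

def optimize_dispatch(prices, p_min, p_max, fuel_cost, start_cost):
    """Brute-force toy unit commitment for a single plant.

    Each hour the plant is OFF, at minimum environmental load, or at
    full load; profit is energy revenue minus fuel cost, with a
    start-up cost charged whenever the plant switches on.
    """
    levels = (0.0, p_min, p_max)
    best_profit, best_plan = float("-inf"), None
    for plan in product(levels, repeat=len(prices)):
        profit, prev_on = 0.0, False
        for price, p in zip(prices, plan):
            on = p > 0
            profit += p * (price - fuel_cost)   # hourly margin
            if on and not prev_on:
                profit -= start_cost            # pay to start up
            prev_on = on
        if profit > best_profit:
            best_profit, best_plan = profit, plan
    return best_profit, best_plan

# Hypothetical 4-hour horizon with a price peak in hours 2-3.
profit, plan = optimize_dispatch(
    prices=[30, 80, 90, 25],  # EUR/MWh
    p_min=40, p_max=100,      # MW: minimum environmental load / full load
    fuel_cost=50,             # EUR/MWh
    start_cost=500,           # EUR per start
)
```

With these numbers the optimum is to stay off in the loss-making hours and run at full load only through the peak; a real MILP solver replaces the exponential enumeration, but the objective structure is the same.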
152

Optimization of energy dispatch in concentrated solar power systems: Design of dispatch algorithm in concentrated solar power tower system with thermal energy storage for maximized operational revenue

Strand, Anna January 2019 (has links)
Concentrated solar power (CSP) is a fast-growing technology for electricity production. Mirrors (heliostats) concentrate solar irradiation onto a receiver through which a heat transfer fluid (HTF) flows. The fluid thereby reaches high temperatures and is used to drive a steam turbine for electricity production. A CSP plant is most often coupled with an energy storage unit, where the HTF is stored before it is dispatched to generate electricity. Electricity is usually sold on an open market with fluctuating spot prices. It is therefore important to generate and sell electricity during the highest-paid hours; this has become increasingly important as the governmental support mechanisms for renewable energy production are phased out, since the technology is starting to be seen as mature enough to compete on the market by itself. A solar power plant thus has an operational protocol that determines when energy is dispatched and electricity is sold. These protocols are often pre-defined, which means optimal production is not achieved, since irradiation and the electricity selling price vary. In this master's thesis, an optimization algorithm for electricity sales is designed in MATLAB. The algorithm solves, for a given time frame, an optimization problem whose objective is to maximize revenue from electricity sales from the solar power plant. The formulation takes into account hourly varying electricity spot prices, hourly varying solar field efficiency, energy flows in the solar power plant, start-up costs (from off to on), and conditions for the logic governing the operational modes. Two conventional pre-defined protocols were also designed so that the performance of a solar power plant under the optimized dispatch protocol could be compared against them.
These three operational protocols were evaluated in three different markets: one with a fluctuating spot price, one regulated market with three fixed price levels, and one spot market with zero prices during sunny hours. The optimized dispatch protocol was found to yield both higher electricity production and higher revenue in all markets, with the biggest differences in the spot markets. To evaluate in what type of power plant the optimizer performs best, a parametric analysis was made in which the size of the storage and power block, the time horizon of the optimizer, and the start-up cost were varied. For the size of the storage and power block, it was found that revenue increases with size, but only up to the level at which the optimizer can dispatch at the optimal hours; beyond that, there is no further increase in revenue. A longer time horizon gives higher revenue, since the optimizer then has more information; with a 24-hour horizon, for example, morning price peaks will be missed. Increasing the start-up cost makes the power plant less flexible, with fewer cycles, without affecting income much.
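The MATLAB optimizer itself is not shown in the abstract. As a hedged sketch of only its simplest special case, suppose all thermal energy is already in storage; then allocating it to the highest-priced hours, up to the power-block limit per hour, maximizes revenue. The prices and capacities below are invented for illustration, and the real problem is harder because charging and dispatch are coupled in time.

```python
def dispatch_stored_energy(prices, energy, block_cap):
    """Allocate a fixed stock of stored energy to the highest-priced
    hours, at most `block_cap` per hour. With all energy already in
    storage, this price-greedy allocation is revenue-optimal."""
    plan = [0.0] * len(prices)
    # Visit hours from the most to the least expensive.
    for hour in sorted(range(len(prices)), key=lambda h: -prices[h]):
        if energy <= 0:
            break
        plan[hour] = min(block_cap, energy)
        energy -= plan[hour]
    revenue = sum(p * q for p, q in zip(prices, plan))
    return plan, revenue

# Hypothetical 4-hour example: 150 MWh stored, 100 MW power block.
plan, revenue = dispatch_stored_energy([20, 50, 40, 10], 150, 100)
```

The pre-defined protocols the thesis compares against would instead dispatch on a fixed schedule regardless of price, which is exactly what this greedy allocation avoids.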
153

Weekly planning of hydropower in systems with large volumes of varying power generation

Ahlfors, Charlotta January 2022 (has links)
Hydropower is the world's largest source of renewable electricity generation. Hydropower plants with reservoirs provide flexibility to power systems. Efficient planning techniques improve the flexibility of power systems and reduce carbon emissions, which is needed in power systems experiencing a rapid change in the balance between power production and consumption due to the increasing share of renewable energy sources such as wind and solar power. Hydropower plants have low operating costs and are used as base power. This thesis focuses on the weekly planning of hydropower in systems with large volumes and varying power generation; a literature review and a maintenance scheduling method are presented. The topic of hydropower planning is well investigated, and various research questions have been studied for many years in different countries. Some of this work is summarized and discussed in the literature reviews presented in this thesis: first, some reviews covering several aspects of hydropower planning, followed by literature reviews for long-term, mid-term, and short-term planning, respectively. Maintenance scheduling in power systems consists of preventive and corrective maintenance. Preventive maintenance is performed at predetermined intervals according to prescribed criteria. This type of maintenance is important for power producers in order to avoid losses in electricity production and income; maintenance scheduling for hydropower plants prevents these losses, since spill in the reservoirs and wear on the turbines can be avoided. Usually, maintenance in hydropower plants is performed on the turbines or at the reservoir intake. A deterministic and a stochastic method are presented for solving a mid-term maintenance scheduling problem formulated as a mixed-integer linear program and solved using dynamic programming. The deterministic method works well in terms of computational time and accuracy.
Compared to the deterministic method, the stochastic method yields a slightly better result, at the cost of requiring larger computational resources.
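The thesis's MILP and dynamic-programming formulations are not given in the abstract. As a loose, hypothetical illustration of the underlying trade-off in deterministic maintenance scheduling, one can pick the contiguous outage window that minimizes foregone revenue (price times inflow energy) over the planning horizon; the weekly prices and inflows below are invented.

```python
def best_maintenance_week(weekly_prices, weekly_inflow, outage_len):
    """Choose the start week of a contiguous turbine outage that
    minimizes foregone revenue, a toy stand-in for the deterministic
    mid-term maintenance scheduling method."""
    n = len(weekly_prices)
    # Lost revenue for each feasible start week of the outage.
    losses = [sum(weekly_prices[w] * weekly_inflow[w]
                  for w in range(start, start + outage_len))
              for start in range(n - outage_len + 1)]
    start = min(range(len(losses)), key=losses.__getitem__)
    return start, losses[start]

# Hypothetical 4-week horizon: cheap, low-inflow weeks in the middle.
start, loss = best_maintenance_week([30, 10, 12, 40], [1, 1, 1, 1], 2)
```

The real problem adds reservoir coupling, spill, and (in the stochastic variant) inflow uncertainty, which is what makes the MILP and dynamic-programming machinery necessary.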
154

Investigación de nuevas metodologías para la planificación de sistemas de tiempo real multinúcleo mediante técnicas no convencionales

Aceituno Peinado, José María 28 March 2024 (has links)
Thesis by compendium of publications. / [EN] Real-time systems are characterized by temporal constraints whose fulfilment guarantees the acceptable operation and feasibility of a system.
In hard real-time systems especially, these temporal constraints must be respected. Such systems are typically applied in areas such as aviation, railway safety, satellites, and process control, among others. A missed deadline in a hard real-time system can therefore lead to a catastrophic failure. The scheduling of real-time systems is an area in which various methodologies, heuristics, and algorithms are studied and applied in an attempt to allocate CPU resources without missing any deadline. The use of multicore computing systems is an increasingly common option in hard real-time systems. This is due, among other reasons, to their high computational performance, thanks to the ability to run multiple processes in parallel. On the other hand, multicore systems present a new problem: the contention that occurs due to the sharing of hardware resources. The source of this contention is the interference that sometimes arises between tasks allocated to different cores that try to access the same shared resource simultaneously, typically shared memory. This added interference can lead to missed deadlines, making the schedule infeasible. This thesis proposes new non-conventional scheduling methodologies and strategies to provide solutions to the interference problem in multicore systems. These methodologies and strategies include scheduling algorithms, task-to-core allocation algorithms, temporal models, and schedulability analyses. The results of this work have been published in several journal articles in the field, which present the new proposals addressing the challenges of task scheduling.
In the majority of these articles the structure is similar: the context is introduced, the existing problem is identified, a proposal to solve or improve the scheduling results is presented, experiments are then carried out to evaluate the proposed methodology in practical terms, the results are analysed, and finally conclusions about the proposal are drawn. The results of the non-conventional methodologies proposed in the articles that comprise this thesis show an improvement in scheduling performance compared to classical algorithms in the area, in particular reduced interference and a higher schedulability rate. / This thesis was carried out within the framework of two national research projects. One of them is PRECON-I4, which pursues predictable and dependable computer systems for Industry 4.0. The other is PRESECREL, which pursues models and platforms for predictable, safe, and dependable industrial computer systems. Both PRECON-I4 and PRESECREL are coordinated projects funded by the Ministry of Science, Innovation and Universities and FEDER funds (AEI/FEDER, EU). The Universitat Politècnica de València, the University of Cantabria, and the Universidad Politécnica de Madrid participate in both projects; IKERLAN S. COOP I.P. also participates in PRESECREL. In addition, part of the results of this thesis has served to validate the allocation of temporal resources in critical systems within the METROPOLIS project (PLEC2021-007609). / Aceituno Peinado, JM. (2024). Investigación de nuevas metodologías para la planificación de sistemas de tiempo real multinúcleo mediante técnicas no convencionales [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/203212
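The abstract mentions schedulability analysis among the thesis's contributions. One classical building block of such analyses (not necessarily the thesis's own method) is response-time analysis for fixed-priority preemptive scheduling of independent periodic tasks with deadlines equal to periods, sketched here: a task's worst-case response time is its own execution time plus the preemption it suffers from all higher-priority tasks, iterated to a fixed point.

```python
import math

def response_time(tasks, i):
    """Worst-case response time of task i under fixed-priority
    preemptive scheduling: R_i = C_i + sum_{j<i} ceil(R_i/T_j) * C_j,
    iterated to a fixed point. `tasks` is a list of (C, T) pairs in
    descending priority order; returns None if the deadline (= T) is
    missed."""
    C, T = tasks[i]
    R = C
    while True:
        # Interference from every higher-priority task j < i.
        R_next = C + sum(math.ceil(R / Tj) * Cj for Cj, Tj in tasks[:i])
        if R_next == R:
            return R
        if R_next > T:
            return None   # unschedulable: response exceeds deadline
        R = R_next

# Hypothetical task set (C, T), highest priority first.
tasks = [(1, 4), (2, 6), (3, 12)]
```

On a multicore platform with shared-resource contention, the interference term grows beyond this uniprocessor bound, which is precisely the problem the thesis's non-conventional methods target.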
155

Improving Utilisation of Rail Freight Routes by Optimised Routing

Bulteel, Raphaël January 2024 (has links)
The routing and scheduling problem is a complex task and a critical challenge in the railway sector, given the many infrastructural and operational characteristics involved. Moreover, this problem has a direct impact on the costs of railway services, through the routes trains take, and on the environmental impact caused by the length of their journeys. Although numerous studies have addressed the routing and scheduling of passenger trains, fewer have focused on freight trains. The primary objective of this study is therefore to enhance the utilisation of rail freight routes by reducing the travel time of freight trains. To determine optimal train routes and transit durations, and to identify the pivotal operational and infrastructural variables that affect the travel time of freight trains, a mixed-integer linear optimization model has been developed. Furthermore, enhancing the efficiency of rail freight entails leveraging automation and digitalization in rail freight transportation to improve rail performance, multimodal services, and end-user satisfaction through better utilisation of the network. The AnyLogic simulation software has been employed to evaluate the influence of digital automatic coupling (DAC) on marshalling-yard train processing time and yard capacity. DAC is seen as an enabling technology for automation in yards and for the use of longer and faster trains on the line. The marshalling yard of Hallsberg is used as a case study. The initial findings indicate that an increase in train length from 700 to 800 metres has a negligible impact on train schedules. Compared to the prevailing scenario, implementing technologies that facilitate reduced inter-train spacing and a halving of block separation enhances the capacity of the railway network.
Accelerating train speeds by 10% is similarly important for enhancing the utilisation of rail freight routes: it results in a more homogeneous traffic flow on the line, with speeds closer to those of passenger trains, thereby increasing network capacity. The results for Hallsberg's marshalling yard demonstrate that the introduction of DAC Type 4 enhances the capacity for handling trains at the arrival, classification, and departure yards. Further capacity gains are possible with the adoption of DAC Type 5, which allows longer trains to be produced in all three parts of the marshalling yard. This is because a greater number of wagons arrive, enabling train formation to be completed more quickly. However, the introduction of longer trains poses a challenge in terms of the number of trains arriving at the marshalling yard, because longer trains require a longer uncoupling and classification process. The primary driver of the capacity gains from the base case to DAC Type 4 and DAC Type 5, in conjunction with the implementation of longer trains across all sections of the marshalling yard, is the reduction in uncoupling times. The integration of findings from the mixed-integer linear optimization model and the AnyLogic model for Hallsberg's marshalling yard underlines the significance of longer trains and of introducing DAC Type 4 and DAC Type 5. The total travel time for trains travelling from Stockholm to Gothenburg, with a classification stop at Hallsberg, can be reduced by up to 33% for a single train. Conversely, for trains travelling from Gothenburg to Stockholm, total travel time can be reduced by up to 13% for the first train and 11% for the second train.
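The routing component of the MILP is not detailed in the abstract. As a minimal, hypothetical stand-in for the route-choice sub-problem, one can compute the shortest travel time over a rail network whose edge weights are segment travel times; the station names and minute values below are invented, not the thesis's data.

```python
import heapq

def shortest_travel_time(graph, src, dst):
    """Dijkstra's algorithm over a directed rail network; edge weights
    are segment travel times in minutes. Returns the minimum total
    travel time from src to dst, or None if dst is unreachable."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == dst:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(pq, (nd, nxt))
    return None

# Hypothetical network: a slow direct route vs. routing via a yard.
rail = {
    "Stockholm": [("Hallsberg", 120), ("Gothenburg", 300)],
    "Hallsberg": [("Gothenburg", 150)],
}
best = shortest_travel_time(rail, "Stockholm", "Gothenburg")
```

The full MILP additionally schedules trains against each other on shared track and adds yard processing times, which a plain shortest path ignores.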
156

Techniques d'analyse et d'optimisation pour la synthèse architecturale de systèmes temps réel embarqués distribués : problèmes de placement, de partitionnement et d'ordonnancement / Analysis and optimization techniques for the architectural synthesis of real time embedded and distributed systems

Mehiaoui, Asma 16 June 2014 (has links)
Modern development methodologies in industry and academia increasingly exploit the concept of a "model" to address the complexity of critical real-time systems. These methodologies define a key stage in which a functional model, designed as a network of function blocks communicating through exchanged data signals, is deployed onto a hardware execution platform model and implemented in a software model consisting of tasks and messages. This stage, called the deployment stage, establishes an operational architecture of the system and therefore requires evaluation and validation of the system's temporal properties. In the context of event-driven real-time systems, temporal properties are verified using schedulability analysis based on response-time analysis. Each deployment choice has an essential impact on the validity and quality of the system. However, existing methodologies provide no support to guide the application designer in exploring the space of possible operational architectures. The objective of this thesis is to develop analysis and automatic synthesis techniques that guide the designer towards a valid operational architecture optimized with respect to system performance. 
Our proposal explores the architecture space while considering all four degrees of freedom determined during the deployment phase: (i) the placement of functional elements on the computing and communication resources of the execution platform, (ii) the partitioning of functional elements into real-time tasks and of data signals into messages, (iii) the assignment of execution priorities to the system's tasks and messages, and (iv) the assignment of the protection mechanism for shared data in periodic real-time systems. We are mainly interested in meeting the temporal constraints and the resource-capacity constraints of the target platform. In addition, we optimize end-to-end latencies and memory consumption. The design-space exploration approaches presented in this thesis are based on MILP (Mixed Integer Linear Programming) optimization and cover both periodically activated and data-driven applications. Unlike many earlier approaches that provide a partial solution to the deployment problem, the proposed methods consider the deployment problem as a whole. The approaches proposed in this thesis are evaluated on both synthetic and industrial applications.
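The schedulability check the abstract relies on is the classical response-time analysis for fixed-priority preemptive tasks. A minimal sketch of that fixed-point iteration, on a hypothetical task set with deadlines equal to periods, might look like:

```python
import math

def response_time(tasks, i):
    """Worst-case response time of task i under fixed-priority preemptive
    scheduling, via the classical fixed-point iteration
    R = C_i + sum over higher-priority j of ceil(R / T_j) * C_j.
    tasks: list of (C, T) pairs sorted highest priority first.
    Returns None if R exceeds the period (deadline = period assumed)."""
    C, T = tasks[i]
    R = C
    while True:
        R_next = C + sum(math.ceil(R / Tj) * Cj for Cj, Tj in tasks[:i])
        if R_next > T:        # deadline missed: unschedulable
            return None
        if R_next == R:       # fixed point reached
            return R
        R = R_next

def schedulable(tasks):
    return all(response_time(tasks, i) is not None for i in range(len(tasks)))

# Hypothetical task set (C, T) in rate-monotonic priority order:
tasks = [(1, 4), (2, 6), (3, 12)]
```

In a design-space exploration such as the one described above, a check like this would be evaluated for every candidate placement, partitioning and priority assignment.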

Optimized management of a distributed demand response aggregation model

Prelle, Thomas 22 September 2014 (has links)
The desire to increase the share of renewable energies in the energy mix increases the share of volatile, non-dispatchable energy and makes the supply-demand balance difficult to maintain. One way to integrate these energies into the current electrical grid is to use small production, consumption and storage units distributed across the country to compensate for under- or over-production. So that these units can take part in the supply-demand balancing process, they are gathered within a flexibility aggregation plant, which is then seen as a virtual power plant. 
As for any other power plant in the grid, its production plan must be determined. In this thesis we first propose an architecture and a management method for an aggregation plant composed of any type of unit. We then present algorithms to compute the production plans of the different types of units while satisfying all their operating constraints. Finally, we propose approaches to compute the production plan of the aggregation plant so as to maximize its financial profit while complying with the grid constraints.
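As an illustration of the kind of profit-maximizing production plan such an aggregation plant must compute, the sketch below exhaustively searches hourly charge/discharge decisions for a single toy storage unit against hypothetical day-ahead prices; the thesis's algorithms naturally handle many units and far richer constraints:

```python
from itertools import product

def best_storage_plan(prices, capacity=1):
    """Exhaustive search over hourly charge(-1)/idle(0)/discharge(+1)
    decisions for a 1-MW, `capacity`-MWh storage unit (no losses).
    Returns the profit-maximising feasible plan and its profit."""
    best_plan, best_profit = None, float("-inf")
    for plan in product((-1, 0, 1), repeat=len(prices)):
        soc, profit, feasible = 0, 0.0, True
        for price, a in zip(prices, plan):
            soc -= a                 # discharging (+1) drains the store
            if not 0 <= soc <= capacity:
                feasible = False
                break
            profit += a * price      # sell when a = +1, buy when a = -1
        if feasible and profit > best_profit:
            best_plan, best_profit = plan, profit
    return best_plan, best_profit

prices = [20, 60, 25, 70]  # hypothetical day-ahead prices, EUR/MWh
plan, profit = best_storage_plan(prices)
```

The optimal toy plan buys in both cheap hours and sells in both expensive ones, the basic arbitrage that a profit-maximizing aggregator exploits.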

Meta-heuristics Iterated Local Search, GRASP and Artificial Bee Colony applied to the Flexible Job Shop to minimize total tardiness

Melo, Everton Luiz de 07 February 2014 (has links)
The production environment addressed in this work is the Flexible Job Shop (FJS), a generalization of the Job Shop (JS). The job scheduling problem in the JS environment is classified by Garey, Johnson and Sethi (1976) as NP-hard, and the FJS is at least as hard as the JS. The FJS consists of a set of jobs, each composed of operations. Each operation must be processed individually, without interruption, on a single machine from a subset of enabled machines. The main performance criterion considered is minimizing job tardiness. Mixed Integer Linear Programming (MILP) models are presented to minimize total tardiness and the completion time of the last operation, the makespan. New job priority rules are proposed, as well as adaptations of rules from the literature. These rules are used by constructive heuristics and are combined with strategies that exploit specific characteristics of the FJS. To improve the solutions initially obtained, local searches and other improvement mechanisms are proposed and used in the development of three metaheuristics of different categories: Iterated Local Search (ILS), a trajectory metaheuristic; Greedy Randomized Adaptive Search Procedure (GRASP), a constructive metaheuristic; and Artificial Bee Colony (ABC), a recently proposed population metaheuristic. 
These methods were selected for their good results on a variety of optimization problems in the literature. Computational experiments with 600 FJS instances allow comparisons between the solution methods. The results show that exploiting the characteristics of the problem allows one of the proposed priority rules to outperform the best rule from the literature on 81% of the instances. The ILS, GRASP and ABC metaheuristics achieve more than 31% improvement over the initial solutions and obtain tardiness values on average only 2.24% above those of the optimal solutions. Modifications to the metaheuristics are also proposed that yield even more significant improvements without increasing execution time. Additionally, a version of the FJS with assembly and disassembly operations (DAFJS) is studied, and experiments with a set of 150 instances also indicate the good performance of the developed methods.
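Of the three metaheuristics, Iterated Local Search is the simplest to outline: repeatedly perturb the best-known solution, re-run a local search, and keep the result if it improves. The sketch below applies it to a toy single-machine total-tardiness instance with hypothetical jobs, not to the flexible job shop studied in the thesis:

```python
import random

def tardiness(seq, jobs):
    """Total tardiness of a sequence; jobs maps id -> (p_time, due_date)."""
    t = total = 0
    for j in seq:
        p, d = jobs[j]
        t += p
        total += max(0, t - d)
    return total

def local_search(seq, jobs):
    """First-improvement adjacent-swap descent until no swap helps."""
    improved = True
    while improved:
        improved = False
        for i in range(len(seq) - 1):
            cand = seq[:]
            cand[i], cand[i + 1] = cand[i + 1], cand[i]
            if tardiness(cand, jobs) < tardiness(seq, jobs):
                seq, improved = cand, True
    return seq

def ils(jobs, iters=50, seed=0):
    """Iterated Local Search: perturb (one random swap), descend, keep best."""
    rng = random.Random(seed)
    best = local_search(sorted(jobs), jobs)
    for _ in range(iters):
        pert = best[:]
        i, k = rng.sample(range(len(pert)), 2)
        pert[i], pert[k] = pert[k], pert[i]
        cand = local_search(pert, jobs)
        if tardiness(cand, jobs) < tardiness(best, jobs):
            best = cand
    return best

# Hypothetical jobs: id -> (processing time, due date)
jobs = {0: (3, 4), 1: (2, 3), 2: (4, 12), 3: (1, 5)}
best_seq = ils(jobs)
```

GRASP and ABC differ mainly in how they generate the starting solutions that this same local search then refines.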

A model of aggregate production planning in a sugar and alcohol mill linked to price fluctuations in spot and futures markets

Carvalho, Marcelo Dias 09 November 2009 (has links)
The objective of this dissertation is to develop an aggregate production planning model that supports management- and board-level decisions in sugar and alcohol mills regarding the cane varieties harvested each week, purchases of sugarcane from third parties, the type of transport (own or outsourced) to use each week, the total cane crushed per week to meet demand, and the (industrial and commercial) processes to be chosen for producing and selling sugar and alcohol. Decisions depend on prices in the domestic, export and futures markets, the company's cash flow, the mill's capacity to store sugar and alcohol, and the possibility of using third-party storage. Decisions on cane purchases, choice of processes and product sales are made weekly over a rolling planning horizon of 52 weeks, which covers the harvest season in centre-south Brazil (mid-March to mid-December, approximately 36 weeks) plus the off-season (approximately 16 weeks, mid-December to mid-March). The search for better marketing strategies to support decision-making is a constant need for entrepreneurs in the sector, who are often surprised by variations in sugar and alcohol prices in the domestic, export and futures markets. 
On the commercial side, this work uses the Delphi method to forecast the sugar and alcohol prices that guide decision-making in the production planning and control of sugar and alcohol mills. Hedging is defined as the financial operation of protecting a given asset of a company against unexpected price variations. This work uses a product-mix selection model for hedging linked to profitability and risk minimization, namely a semi-variance model with Markowitz scenario analysis. For the decisions related to the agricultural, industrial and commercial parts, a mixed-integer linear programming model is used and solved with the mathematical programming software LINGO and its interfaces with Excel spreadsheets. For the decisions on the optimal weekly hedging mix, a quadratic programming model is used, also solved with LINGO and its Excel interfaces. A case study was carried out at a sugar and alcohol mill in Junqueirópolis (SP) to validate the proposed model.
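The semi-variance criterion used for the hedging mix penalizes only downside deviations, unlike ordinary variance. A minimal sketch (with hypothetical two-product return scenarios, and a simple grid search standing in for the quadratic program solved with LINGO) might look like:

```python
def semivariance(returns, target=0.0):
    """Downside semi-variance: mean squared shortfall below `target`
    across equally probable scenarios."""
    shortfalls = [min(0.0, r - target) for r in returns]
    return sum(s * s for s in shortfalls) / len(returns)

def mix_return(weights, scenario):
    return sum(w * r for w, r in zip(weights, scenario))

# Hypothetical scenarios: (sugar return, ethanol return) per scenario.
scenarios = [(0.10, -0.05), (-0.02, 0.08), (0.04, 0.03)]

def best_mix(step=0.1):
    """Grid search over (sugar, ethanol) weights: among mixes with a
    non-negative mean return, keep the one with minimum semi-variance."""
    best = None
    w = 0.0
    while w <= 1.0 + 1e-9:
        weights = (w, 1.0 - w)
        rets = [mix_return(weights, s) for s in scenarios]
        mean = sum(rets) / len(rets)
        sv = semivariance(rets)
        if mean >= 0 and (best is None or sv < best[1]):
            best = (weights, sv)
        w += step
    return best
```

With these made-up scenarios the search settles on a mix whose return is non-negative in every scenario, i.e. zero downside risk; real price scenarios rarely allow that, which is where the trade-off against profitability bites.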
160

Batch Processor Scheduling - A Class Of Problems In Steel Casting Foundries

Ramasubramaniam, M 06 1900 (has links)
Modern manufacturing systems need new types of scheduling methods. While traditional scheduling methods are primarily concerned with sequencing of jobs, modern manufacturing environments provide the additional possibility of processing jobs in batches. This adds to the complexity of scheduling. There are two types of batching: (i) serial batching (jobs may be batched if they share the same setup on a machine and one job is processed at a time; a machine that processes jobs in this manner is called a discrete processor) and (ii) parallel batching (several jobs can be processed simultaneously on a machine at a time; a machine that processes jobs in this manner is called a batch processor or batch processing machine). Parallel batching environments have attracted wide attention from researchers working in the field of scheduling. In particular, taking inspiration from studies of scheduling batch processors in semiconductor manufacturing [Mathirajan and Sivakumar (2006b) and Venkataramana (2006)] and in steel casting industries [Krishnaswamy et al. (1998), Shekar (1998) and Mathirajan (2002)] in the Management Studies Department of the Indian Institute of Science, this thesis addresses a special batch-processor scheduling problem observed in steel casting manufacturing. A fundamental feature of the steel casting industry is its extreme flexibility, enabling castings to be produced with almost unlimited freedom in design over an extremely wide range of sizes, quantities and materials suited to practically every environment and application. Furthermore, the steel casting industry is capital intensive and highly competitive. From the viewpoint of throughput and utilization of the important and costly resources in foundry manufacturing, the process-controlled furnace operations for melting and pouring, as well as the heat-treatment furnace operations, are critical for meeting overall production schedules. 
The two furnace operations are batch processes that have distinctive constraints on job-mixes in addition to the usual capacity and technical constraints associated with any industrial processes. The benefits of effective scheduling of these batch processes include higher machine utilization, lower work-in-process (WIP) inventory, shorter cycle time and greater customer satisfaction [Pinedo (1995)]. Very few studies address the production planning and scheduling models for a steel foundry, considering the melting furnace of the pre-casting stage as the core foundry operation [Voorhis et al. (2001), Krishnaswamy et al. (1998) and Shekar (1998)]. Even though the melting and pouring operations may be considered as the core of foundry operations and their scheduling is of central importance, the scheduling of heat-treatment furnaces is also of considerable importance. This is because the processing time required at the heat treatment furnace is often longer compared to other operations in the steel-casting foundry and therefore considerably affects the scheduling, overall flow time and WIP inventory. Further, the heat-treatment operation is critical because it determines the final properties that enable components to perform under demanding service conditions such as large mechanical load, high temperature and anti-corrosive processing. It is also important to note that the heat-treatment operation is the only predominantly long process in the entire steel casting manufacturing process, taking up a large part of total processing time (taking up to a few days as against other processes that typically take only a few hours). Because of these, the heat-treatment operation is a major bottleneck operation in the entire steel casting process. The jobs in the WIP inventory in front of heat-treatment furnace vary widely in sizes (few grams to a ton) and dimensions (from 10 mm to 2000 mm). 
Furthermore, castings are primarily classified into a number of job families based on alloy type, such as low-alloy and high-alloy castings. These job families are incompatible, as the temperature requirements of low and high alloys differ for the same type of heat-treatment operation. The families are further divided into sub-families based on the heat-treatment operations they undergo; these sub-families are also incompatible, as each requires a different combination of heat-treatment operations. The widely varying job sizes, job dimensions and multiple incompatible job families introduce a high degree of complexity into scheduling the heat-treatment furnace. Scheduling a heat-treatment furnace with multiple incompatible job families can have a profound effect on the overall production rate, since the processing time of the heat-treatment operation is much longer than that of other operations. Considering the complexity of the process and the time consumed by the heat-treatment operation, efficient scheduling of this operation is imperative in order to maximize throughput and to enhance the productivity of the entire steel casting manufacturing process. This is of importance to the firm. The management's concern with increasing the throughput of the bottleneck machine, and thereby productivity, motivated us to adopt makespan as the scheduling objective. From recent observations of heat-treatment operations in a couple of steel casting industries and from the research studies reported in the literature, we noticed that the real-life problem of dynamically scheduling a heat-treatment furnace with multiple incompatible job families, non-identical job sizes, non-identical job dimensions, and non-agreeable release times and due dates, with the aims of maximizing throughput and utilization and minimizing work-in-process inventory, has not been addressed at all. However, there are a few studies [Mathirajan et al. 
(2001, 2002, 2004a, 2007)] which have addressed the problem of scheduling a heat-treatment furnace with incompatible job families and non-identical job sizes to maximize the utilization of the furnace. Owing to the gap between the real-life situation of dynamically scheduling the heat-treatment furnace in steel casting manufacturing and the research reported on this problem, we identified three new classes of batch processor problems, applicable to real-life situations depending on the type of heat-treatment operation(s) carried out and the scale of the steel casting industry (small, medium or large); this thesis addresses these new classes of batch processor scheduling problems. The first part of the thesis addresses our new Research Problem (called Research Problem 1) of minimizing makespan (Cmax) on a batch processor (BP) with a single job family (SJF), non-identical job sizes (NIJS), and non-identical job dimensions (NIJD). This problem is of interest to small-scale steel casting industries performing only one type of heat-treatment operation, such as surface hardening. Generally, only a few steel casting industries offer this type of special heat-treatment operation, so customers are willing to accept delays in the completion of their orders; due-date issues are therefore not important for these industries. We formulate the problem as a Mixed Integer Linear Programming (MILP) model and validate the proposed model through a numerical example. To understand the computational intractability issue, we carry out a small computational experiment. Its results indicate that the computational time required to solve the MILP model, as a function of problem size, is non-deterministic and non-polynomial. 
Due to the computational intractability of the proposed MILP model, we propose five variants of a greedy heuristic algorithm and a genetic algorithm for Research Problem 1. We carry out computational experiments to assess the performance of the heuristic algorithms from two perspectives: (i) comparison with optimal solutions on small-scale instances and (ii) comparison with a lower bound on large-scale instances. We choose five important problem parameters for the computational experiment and propose a suitable experimental design to generate pseudo problem instances. As there is no lower-bound (LB) procedure for Research Problem 1, this thesis develops an LB procedure that provides an LB on makespan by considering both the NIJS and NIJD characteristics together. Before using the proposed LB procedure to evaluate the heuristic algorithms, we conduct a computational experiment to gauge the quality of the LB on makespan against the optimal makespan on a number of small-scale instances. The results indicate that the proposed LB procedure is efficient and can be used to obtain an LB on makespan for any large-scale problem. In the first perspective of the evaluation, the proposed heuristic algorithms are run on small-scale problem instances and the makespan values are recorded. We solve the MILP model to obtain optimal solutions for these instances. To compare the proposed heuristic algorithms we use the performance measures: (a) the number of times the heuristic solution equals the optimal solution and (b) the average percentage loss with respect to the optimal solution. In the second perspective, the proposed heuristic algorithms are run on large-scale problem instances and the makespan values are recorded. 
The LB procedure is also run on these problem instances to obtain an LB on makespan. To compare the heuristic algorithms against the LB on makespan, we use the performance measures: (a) the number of times the heuristic solution equals the LB on makespan, (b) the average percentage loss with respect to the LB on makespan, (c) the average relative percentage deviation and (d) the maximum relative percentage deviation. We extend Research Problem 1 by including additional job characteristics: job arrival time at the WIP inventory area of the heat-treatment furnace, due dates, and an additional constraint of non-agreeable release times and due dates (NARD). Due-date considerations and the NARD constraint (giving Research Problem 2) are imperative for small-scale steel casting foundries performing a traditional, but single, type of heat-treatment operation such as annealing, where due-date compliance matters because many steel casting industries offer this type of heat treatment. The mathematical model, LB procedure, greedy heuristic algorithm and genetic algorithm proposed for Research Problem 1, including the computational experiments, are appropriately modified and/or extended to address Research Problem 2. Finally, we extend Research Problem 2 by including an additional real-life dimension: multiple incompatible job families (MIJF). This new Research Problem (called Research Problem 3) is more relevant to medium- and large-scale steel casting foundries performing more than one type of heat-treatment operation, such as homogenizing and tempering, or normalizing and tempering. The solution methodologies, the LB procedure and the computational experiments proposed for Research Problem 2 are further modified and enriched to address Research Problem 3. 
From the detailed computational experiments conducted for each of the research problems defined in this study, we observe that: (a) the problem parameters considered have an influence on the performance of the heuristic algorithms, (b) the proposed LB procedure is efficient, (c) the proposed genetic algorithm performs best among the proposed heuristics (although its computational time grows as problem size increases), and (d) if the decision maker wants the computationally most efficient algorithm among those proposed, the greedy heuristic variants SWB, SWB(NARD) and SWB(NARD&MIJF) are relatively the best algorithms for Research Problems 1, 2 and 3 respectively.
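As a flavour of the greedy batching heuristics evaluated in the thesis, the sketch below applies a simple first-fit-decreasing rule to a single-family batch processor, using hypothetical job sizes and processing times; the actual SWB variants are more elaborate and also handle dimensions, release times and incompatible families:

```python
def greedy_batches(jobs, capacity):
    """First-fit-decreasing batching for a single-family batch processor:
    sort jobs by size (largest first), place each into the first open batch
    with enough room, otherwise open a new batch. jobs: list of
    (size, p_time). A batch's processing time is the max p_time of its
    jobs; makespan is the sum of batch times, as batches run sequentially."""
    batches = []  # each batch: [remaining_capacity, max_p_time]
    for size, p in sorted(jobs, reverse=True):
        for b in batches:
            if b[0] >= size:
                b[0] -= size
                b[1] = max(b[1], p)
                break
        else:
            batches.append([capacity - size, p])
    return sum(b[1] for b in batches)

# Hypothetical jobs (size, processing time) and furnace capacity:
jobs = [(6, 4), (5, 3), (4, 2), (3, 5), (2, 1)]
makespan = greedy_batches(jobs, capacity=10)
```

Packing the large jobs first tends to fill batches tightly, which is exactly the furnace-utilization lever the thesis's lower bound and heuristics reason about.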
