About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Its metadata is collected from universities around the world; if you manage a university, consortium or country archive and want to be added, details can be found on the NDLTD website.
1

"Virtual malleability" applied to MPI jobs to improve their execution in a multiprogrammed environment

Utrera Iglesias, Gladys Miriam 10 December 2007 (has links)
This work focuses on the scheduling of MPI jobs executing on shared-memory multiprocessors (SMPs). The objective was to obtain the best response-time performance in multiprogrammed multiprocessor systems using batch systems, assuming all jobs have the same priority. To that end, the benefits of supporting malleability in MPI jobs to reduce fragmentation, and consequently improve system performance, were studied. The contributions of this work can be summarized as follows:
• Virtual malleability: a mechanism in which a job is assigned a dynamic processor partition where the number of processes is greater than the number of processors. The partition size is modified at runtime according to external requirements, such as the system load, by varying the multiprogramming level, making the job contend for resources with itself. In addition, a mechanism decides at runtime whether to apply local or global process queues to an application, depending on the load balance among its processes.
• A job scheduling policy that decides, for instance, how many processes to start with and the maximum multiprogramming degree, based on the type and number of applications running and queued. Moreover, as soon as a job finishes execution and there are queued jobs, the algorithm analyzes whether it is better to start another job immediately or to wait until more resources become available.
• A new alternative to backfilling strategies for the problem of the execution-time window expiring. Virtual malleability is applied to the backfilled job, reducing its partition size without aborting or suspending it as in traditional backfilling.
The evaluation of this thesis follows a practical approach: all the proposals were implemented, modifying the three scheduling levels: queuing system, processor scheduler and runtime library.
The impact of the contributions was studied under several types of workloads, varying machine utilization, communication and balance degree of the applications, multiprogramming level, and job size. Results showed that it is possible to offer malleability for MPI jobs. An application obtained better performance when contending for resources with itself than with other applications, especially in workloads with high machine utilization. Load imbalance was taken into account, obtaining better performance when the right queue type was applied to each application independently. The proposed job scheduling policy exploited virtual malleability by choosing, at the beginning of execution, parameters such as the number of processes and the maximum multiprogramming level. It performed well under bursty workloads with low to medium machine utilization. However, as the load increased, virtual malleability was no longer enough: when the machine is heavily loaded, jobs, once shrunk, are unable to expand, so they must execute all the time on a partition smaller than the job size, degrading performance. At that point, the job scheduling policy concentrated on moldability alone. Fragmentation was also alleviated by applying backfilling techniques to the job scheduling algorithm. Virtual malleability proved to be an interesting improvement for the window-expiring problem: backfilled jobs, even on a smaller partition, can continue execution, reducing the memory swapping generated by aborts and suspensions. In this way the queuing system is prevented from reinserting the backfilled job in the queue and re-executing it later.
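The shrink/expand behaviour behind virtual malleability can be sketched in a few lines. The function below is a hypothetical illustration only (the names, the even-share rule and the load measure are assumptions, not the thesis' actual scheduler logic): a job keeps its process count fixed while its processor partition varies with the number of competing jobs.

```python
# Hypothetical sketch of virtual malleability: a job keeps its original
# number of processes, but its processor partition shrinks or grows with
# the system load, i.e. the multiprogramming level varies at runtime.

def resize_partition(n_processes, n_cpus_total, system_load):
    """Return the partition size (number of CPUs) granted to a job.

    system_load: number of jobs currently competing for the machine.
    The job contends with itself whenever the partition is smaller
    than its process count (processes > processors).
    """
    # Even share of the machine among competing jobs, at least 1 CPU.
    share = max(1, n_cpus_total // max(1, system_load))
    # Never hand out more CPUs than the job has processes to run.
    return min(share, n_processes)

# A 32-process MPI job on a 16-CPU machine:
print(resize_partition(32, 16, 1))  # lightly loaded -> whole machine: 16
print(resize_partition(32, 16, 4))  # 4 competing jobs -> 4 CPUs
```

With 32 processes on 4 CPUs the job multiplexes its own processes, which is exactly the "contend for resources with itself" situation the abstract describes.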
2

Heuristiska algoritmer för schemaläggning i real-tidssystem med hänsyn till data beroenden / Heuristic minimization of data latency in offline scheduling of periodic real-time jobs

Abdulla, Ariyan, Andersson, Erik January 2018 (has links)
The schedule for the jobs in a real-time system can have a huge impact on how the system behaves. Since real-time systems are common in safety-critical applications, it is important that the scheduling is done in a valid way. Furthermore, one can enhance the performance of the applications by minimizing data latency and jitter. A challenge is that jobs in real-time systems usually have complex constraints, making it too time-consuming to minimize data latency and jitter to optimality. The purpose of this report is to investigate the possibility of creating high-quality schedules using heuristics, with the goal of keeping the computational time under one minute. This is done by comparing three different algorithms on real scheduling instances provided by the company Arcticus. The first algorithm is a greedy heuristic, the second a local search, and the third a metaheuristic, simulated annealing. The results indicate that data latency can be reduced while keeping the computational time below one minute.
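As a rough illustration of the third algorithm, here is a toy simulated-annealing search over job orderings with a made-up latency cost (the jobs, dependencies, penalty constant and cooling schedule are all invented for this sketch; the real Arcticus instances have far richer constraints):

```python
import math
import random

# Toy simulated annealing over job orderings. "Data latency" of a
# producer->consumer dependency is the distance between the two jobs in
# the schedule; placing the consumer before its producer draws a heavy
# penalty instead of being ruled out structurally.

def latency(order, deps):
    pos = {job: i for i, job in enumerate(order)}
    total = 0
    for producer, consumer in deps:
        gap = pos[consumer] - pos[producer]
        total += gap if gap > 0 else 1000  # penalty: consumer runs first
    return total

def anneal(jobs, deps, steps=5000, temp=10.0, cooling=0.999, seed=1):
    rng = random.Random(seed)
    cur = list(jobs)
    cur_cost = latency(cur, deps)
    best, best_cost = list(cur), cur_cost
    for _ in range(steps):
        i, j = rng.sample(range(len(cur)), 2)
        cand = list(cur)
        cand[i], cand[j] = cand[j], cand[i]      # swap-two neighbourhood
        cand_cost = latency(cand, deps)
        # Accept improvements always, worsenings with Boltzmann probability.
        if cand_cost <= cur_cost or rng.random() < math.exp((cur_cost - cand_cost) / temp):
            cur, cur_cost = cand, cand_cost
            if cur_cost < best_cost:
                best, best_cost = list(cur), cur_cost
        temp *= cooling
    return best, best_cost

jobs = ["sense", "filter", "control", "actuate"]
deps = [("sense", "filter"), ("filter", "control"), ("control", "actuate")]
order, cost = anneal(list(reversed(jobs)), deps)  # start from the worst order
```

For this chain the minimum cost is 3 (each dependency one slot apart); a production version would use a feasibility-preserving neighbourhood rather than a plain penalty term.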
3

Configuring mode changes in fixed-priority preemptively scheduled real-time systems = Configuração de mudanças de modo em sistemas de tempo real escalonados com política preemptiva de prioridade fixa

Massaro Júnior, Flávio Rubens, 1976- 27 August 2018 (has links)
Advisors: Paulo Sérgio Martins Pedro, Edson Luiz Ursini / Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Tecnologia / Abstract: Modes of operation and mode changes are a useful abstraction to enable configurable, flexible real-time systems. Substantial work on the fixed-priority preemptive scheduling approach has allowed tasks across a mode change to be provided with real-time guarantees. However, the proper configuration of critical parameters such as task offsets, despite initial work, remains a gap in research. Without a method that automates this design step, while assuring that the basic requirements are met, the full adoption of mode changes in real-time systems remains limited to relatively simple systems with limited task sets. We propose a method to assign offsets to tasks across a mode change, using a metaheuristic approach (genetic algorithms). This method allows the configuration and/or the minimization of the worst-case latency of a mode change. The latency of a mode change is a critical parameter to be minimized, since during the change the system offers limited functionality, as the task set is still incomplete. We also provide a classification of mode changes according to applications' requirements. This classification was useful, once applied to a number of case studies, both to validate the configuration approach and, to a greater extent, to show that the method is flexible in that it can accommodate a wide variety of types of mode changes / Mestrado / Mestre em Tecnologia
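A genetic algorithm for assigning offsets can be sketched as follows. The latency model here is a deliberately crude stand-in (latest completion time plus a clash penalty), not the response-time analysis used in the dissertation; the encoding, selection, crossover and mutation choices are likewise assumptions:

```python
import random

# Hypothetical GA sketch in the spirit of the offset-assignment method:
# each individual is a vector of task offsets in [0, period); fitness is
# a crude stand-in "latency" = latest completion time, plus a penalty
# when two tasks share an offset. All constants here are assumptions.

def latency(offsets, wcets):
    finish = max(o + c for o, c in zip(offsets, wcets))
    clashes = sum(1 for i in range(len(offsets))
                  for j in range(i + 1, len(offsets))
                  if offsets[i] == offsets[j])
    return finish + 10 * clashes  # clash penalty is arbitrary

def evolve(wcets, period=20, pop_size=30, gens=200, seed=7):
    rng = random.Random(seed)
    n = len(wcets)
    pop = [[rng.randrange(period) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda ind: latency(ind, wcets))
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)             # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                # point mutation
                child[rng.randrange(n)] = rng.randrange(period)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda ind: latency(ind, wcets))

best = evolve([3, 5, 2])  # worst-case execution times of three tasks
```

The point of the sketch is the search machinery, not the fitness function: in the dissertation the fitness would be the worst-case mode-change latency obtained from schedulability analysis.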
4

Deterministic Scheduling Of Parallel Discrete And Batch Processors

Venkataramana, M 07 1900 (has links)
Scheduling concerns the allocation of limited resources to tasks over time. In manufacturing systems, scheduling amounts to assigning jobs to the available processors over a period of time. Our research focuses on scheduling in systems of parallel processors, which is challenging from both the theoretical and practical perspectives. Systems of parallel processors are a common occurrence in different types of modern manufacturing systems such as job shops, batch shops and mass production. A variety of important and challenging problems with realistic settings in a system of parallel processors are considered. We consider two types of processors: discrete and batch processors. A processor that produces one job at a time is called a discrete processor. A batch processor can produce several jobs simultaneously by processing jobs in batch form, which is commonly seen in semiconductor manufacturing, heat-treatment operations and chemical processing industries. Our aim is to develop efficient solution methodologies (heuristics/metaheuristics) for three different problems in the thesis. The first two problems consider the objective of minimizing total weighted tardiness for discrete and batch processors, where customer delivery-time performance is critical. The third problem deals with the objective of minimizing total weighted completion time for batch processors, to reduce work-in-process inventory. Specifically, the first problem deals with the scheduling of parallel identical discrete processors to minimize total weighted tardiness. We develop a metaheuristic based on the Ant Colony Optimization (ACO) approach to solve the problem and compare it with the best available heuristics in the literature, such as the apparent tardiness cost and modified due date rules. Extensive experimentation is conducted to evaluate the performance of the ACO approach on different problem sizes with varied tardiness factors.
Our experimentation shows that the proposed ant colony optimization algorithm yields promising results compared to the best of the available heuristics. The second problem concerns the scheduling of jobs on parallel identical batch processors to minimize total weighted tardiness. It is assumed that the jobs are incompatible with respect to job families, meaning that jobs from different families cannot be processed together. We decompose the problem into two stages, batch formation and batch scheduling, as in the literature. Ant colony optimization based heuristics are developed in which ACO is used to solve the batch scheduling problem. Our computational experimentation shows that the five proposed ACO-based heuristics perform better than the best available traditional dispatching rule, the ATC-BATC rule. The third scheduling problem is to minimize total weighted completion time in a system of parallel identical batch processors. In real-world manufacturing systems, jobs to be scheduled come in lots with different job volumes (i.e., numbers of jobs) and priorities. The realistic settings of lots and high batch capacity are considered in this problem. The scheduling problem is formulated as a mixed integer non-linear program. We develop a solution framework based on a decomposition approach for this problem. Two heuristics are proposed based on the decomposition approach, and their performance is evaluated for two and three batch processors by comparison with the solution of the LINGO solver.
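For reference, the apparent tardiness cost (ATC) rule used as a baseline above can be sketched compactly: whenever a machine frees up at time t, it dispatches the unscheduled job with the highest ATC index (w/p)·exp(−max(d − p − t, 0)/(k·p̄)). This is a minimal single-stage sketch, not the thesis' implementation; the job data and the look-ahead value k below are illustrative only:

```python
import heapq
import math

# Minimal sketch of the apparent-tardiness-cost (ATC) dispatching rule on
# parallel identical machines. Jobs are (weight, processing_time, due_date);
# k is the usual look-ahead parameter and p_bar the mean processing time.

def atc_index(job, t, k, p_bar):
    w, p, d = job
    return (w / p) * math.exp(-max(d - p - t, 0.0) / (k * p_bar))

def atc_schedule(jobs, n_machines=2, k=2.0):
    p_bar = sum(p for _, p, _ in jobs) / len(jobs)
    free = [(0.0, m) for m in range(n_machines)]  # (time machine frees, id)
    heapq.heapify(free)
    pending, seq = list(jobs), []
    while pending:
        t, m = heapq.heappop(free)                # earliest-free machine
        job = max(pending, key=lambda j: atc_index(j, t, k, p_bar))
        pending.remove(job)
        seq.append((m, t, job))                   # (machine, start, job)
        heapq.heappush(free, (t + job[1], m))
    return seq

def twt(seq):
    # Total weighted tardiness of the resulting schedule.
    return sum(w * max(t + p - d, 0.0) for _, t, (w, p, d) in seq)
```

On a single machine with jobs (1, 4, 4), (10, 2, 2) and (1, 6, 20), the rule dispatches the urgent heavy job first, giving a total weighted tardiness of 2.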
5

Heuristic Scheduling Algorithms For Parallel Heterogeneous Batch Processors

Mathirajan, M 11 1900 (has links)
In the last decade, market pressure for a greater variety of products forced a gradual shift from continuous manufacturing to batch manufacturing in various industries. Consequently, batch scheduling problems have attracted the attention of researchers in production and operations management. This thesis addresses the scheduling of parallel non-identical batch processors in the presence of dynamic job arrivals, incompatible job families and non-identical job sizes. The problem abstracts the scheduling of heat-treatment furnace operations for castings in a steel foundry. It is of considerable interest in this sector, as a large proportion of the total production time is spent in heat-treatment processing. The problem is also encountered in other industrial settings, such as the burn-in operation in the final testing stage of semiconductor manufacturing and the manufacturing of steel, ceramics, aircraft parts, footwear, etc. A detailed literature review and personal communications with experts revealed that this class of batch scheduling problems has not been addressed hitherto. A major concern in the management of foundries is to maximize throughput and reduce flow time and work-in-process inventories. We therefore chose as the primary scheduling objective the utilization of the batch processors, and as secondary objectives the minimization of overall flow time and of the weighted average waiting time per job. This formulation can be considered an extension of the problems studied by Dobson and Nambinadom (1992), Uzsoy (1995), Zee et al. (1997) and Mehta and Uzsoy (1998). Our effort to carefully catalogue the large number of variants of deterministic batch scheduling problems led us to develop a taxonomy and notation. Not surprisingly, we are able to show that our problem is NP-hard and is therefore in the company of many scheduling problems that are difficult to solve.
Initially two heuristic algorithms were developed: a mathematical-programming-based heuristic algorithm (MPHA) and a greedy heuristic algorithm. Due to the computational overheads in implementing MPHA compared with the greedy heuristic, we chose to focus on the latter as the primary scheduling methodology. Preliminary experimentation led us to the observation that the performance of greedy heuristics depends critically on the selection of job families. So eight variants of the greedy heuristic, differing mainly in the decision on "job-family selection", were proposed. These eight heuristics form two sets, {A1, A2, A3, A4} and the modified {MA1, MA2, MA3, MA4}, which differ only in how the "job-family" index, weighted shortest processing time, is computed. Computational experiments were carried out to evaluate the performance of the eight heuristics. The analysis of the experimental data is presented from two perspectives. The goal of the first perspective was to evaluate the absolute quality of the solutions obtained by the proposed heuristic algorithms compared with estimated optimal solutions. The second perspective was to compare the relative performance of the proposed heuristics. The test problems generated were designed to reflect real-world scheduling problems that we observed in the steel-casting industry. Three important problem parameters for test-set generation are the number of jobs [n], job priority [P], and job family [F]. We considered 5 levels for n, 2 levels for P and 2 levels for F. The test set reflects that (i) job sizes vary uniformly, (ii) there are two batch processors and (iii) there are five incompatible job families with different processing times. 15 problem instances were generated for each level of (n, P, F).
Out of the many procedures available in the literature for estimating the optimal value of combinatorial optimization problems, we used the procedure based on the Weibull distribution, as discussed in Rardin and Uzsoy (2001). For each of the 300 randomly generated problem instances, 15 feasible solutions (i.e., values of the average utilization of batch processors (AUBP)) were obtained using a random decision rule for the first two stages and a "best-fit" heuristic for the last stage of the scheduling problem. These 15 feasible solutions were used to estimate the optimal value, which they are expected to yield with a very high probability. Both the average and worst-case performance of the heuristics indicated that algorithms A3 and A4 on average yielded better utilization than the estimated optimal value. This indicates that the Weibull-based technique may have yielded conservative estimates of the optimal value. The other heuristic algorithms found solutions inferior to the estimated optimal value, but the deviations were very small. From this, we may infer that all the proposed heuristic algorithms are acceptable. The relative evaluation of the heuristics was in terms of both computational effort and solution quality. The computational burden was low enough, on average, to run all the proposed heuristics on each problem instance and select the best solution. Also, any algorithm from the first set {A1, A2, A3, A4} takes more computational time than any one from the second set {MA1, MA2, MA3, MA4}. Regarding solution quality, the following inferences were made:
* In general, the heuristic algorithms are sensitive to the choice of problem factors with respect to all the scheduling objectives.
* The three algorithms A3, MA4 and MA1 are observed to be superior with respect to the scheduling objectives of maximizing average utilization of batch processors (AUBP), minimizing overall flow time (OFT) and minimizing weighted average waiting time (WAWT), respectively. Furthermore, the heuristic algorithm MA1 turns out to be the best choice if we trade off all three objectives, AUBP, OFT and WAWT. Finally, we carried out simple sensitivity-analysis experiments to understand the influence of some scheduling parameters on the performance of the heuristic algorithms. These concerned one-at-a-time changes in (1) the job-size distribution, (2) the capacities of the batch processors and (3) the processing times of the job families. The analyses suggest that changes in these input parameters do have an influence, and the results can be used to guide the selection of a heuristic for a particular combination of input parameters. For example, if we have to pick a single heuristic algorithm, then MA1 is the best choice considering both performance and the robustness indicated by the sensitivity analysis. In summary, this thesis examined a problem arising in the scheduling of heat-treatment operations in the steel-casting industry. The problem was abstracted to a class of deterministic batch scheduling problems. We analyzed the computational complexity of this problem and showed that it is NP-hard and therefore unlikely to admit a scalable exact method. Eight variants of a fast greedy heuristic were designed to solve the scheduling problem of interest. Extensive computational experiments were carried out to compare the performance of the heuristics with estimated optimal values (using the Weibull technique) and also for relative effectiveness. This showed that the heuristics are capable of consistently obtaining near-optimal (estimated) solutions with very low computational burden for large-scale problems.
Finally, a comprehensive sensitivity analysis was carried out to study the influence of a few parameters, by changing them one at a time, on the performance of the heuristic algorithms. This type of analysis gives users some confidence in the robustness of the proposed heuristics.
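The "best-fit" batch-formation step mentioned in the experimental setup can be illustrated with a small sketch. This is a generic best-fit-decreasing placement by job size within a family, under an assumed scalar capacity; the thesis' actual heuristic also accounts for further job characteristics:

```python
# Rough sketch of a best-fit batch-formation step (not the thesis' exact
# heuristic): each job has a size, a batch holds jobs of a single family
# up to capacity C, and each job goes into the fullest existing batch of
# its family that can still hold it, opening a new batch otherwise.

def best_fit_batches(jobs, capacity):
    """jobs: list of (family, size); returns list of (family, [sizes])."""
    batches = []
    for family, size in sorted(jobs, key=lambda j: -j[1]):  # largest first
        candidates = [b for b in batches
                      if b[0] == family and sum(b[1]) + size <= capacity]
        if candidates:
            # Best fit: the candidate batch with the least remaining room.
            fullest = max(candidates, key=lambda b: sum(b[1]))
            fullest[1].append(size)
        else:
            batches.append((family, [size]))
    return batches
```

For jobs [("A", 5), ("A", 4), ("A", 3), ("B", 6)] and capacity 8, this yields three batches, with the size-3 job packed next to the size-5 job of the same family.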
6

Прилог аутоматској паралелизацији секвенцијалног машинског кода / Prilog automatskoj paralelizaciji sekvencijalnog mašinskog koda / An approach to automatic parallelization of sequential machine code

Marinković Vladimir 24 September 2018 (has links)
The PhD thesis analyzes support for multicore and manycore systems with the aim of better utilizing their processing power. The purpose of the research is to find a solution that parallelizes existing sequential programs, at the binary level, without programmer intervention, so that code originally executing on a single core (or processor) runs in parallel. The result is a solution and a set of tools for parallelizing sequential machine code, which autonomously create programs that execute in parallel across the cores of a multicore processor while achieving balanced processor load. The primary goal is to speed up program execution on the multicore processor in order to meet real-time constraints. The obtained solution could also be used to reduce energy consumption by lowering the processor clock while preserving the original program execution time.
7

Batch Processor Scheduling - A Class Of Problems In Steel Casting Foundries

Ramasubramaniam, M 06 1900 (has links)
Modern manufacturing systems need new types of scheduling methods. While traditional scheduling methods are primarily concerned with the sequencing of jobs, modern manufacturing environments provide the additional possibility of processing jobs in batches, which adds to the complexity of scheduling. There are two types of batching: (i) serial batching, where jobs may be batched if they share the same setup on a machine and one job is processed at a time (a machine that processes jobs in this manner is called a discrete processor), and (ii) parallel batching, where several jobs can be processed simultaneously on a machine (a machine that processes jobs in this manner is called a batch processor or batch processing machine). Parallel batching environments have attracted wide attention from researchers working in the field of scheduling. In particular, taking inspiration from studies of scheduling batch processors in semiconductor manufacturing [Mathirajan and Sivakumar (2006b) and Venkataramana (2006)] and in steel casting industries [Krishnaswamy et al. (1998), Shekar (1998) and Mathirajan (2002)] in the Management Studies Department of the Indian Institute of Science, this thesis addresses a special batch-processor scheduling problem observed in steel casting manufacturing. A fundamental feature of the steel casting industry is its extreme flexibility, enabling castings to be produced with almost unlimited freedom in design over an extremely wide range of sizes, quantities and materials, suited to practically every environment and application. Furthermore, the steel casting industry is capital-intensive and highly competitive. From the viewpoint of throughput and utilization of the important and costly resources in foundry manufacturing, it was felt that the process-controlled furnace operations for melting and pouring, as well as the heat-treatment furnace operations, are critical for meeting overall production schedules.
The two furnace operations are batch processes that have distinctive constraints on job-mixes in addition to the usual capacity and technical constraints associated with any industrial processes. The benefits of effective scheduling of these batch processes include higher machine utilization, lower work-in-process (WIP) inventory, shorter cycle time and greater customer satisfaction [Pinedo (1995)]. Very few studies address the production planning and scheduling models for a steel foundry, considering the melting furnace of the pre-casting stage as the core foundry operation [Voorhis et al. (2001), Krishnaswamy et al. (1998) and Shekar (1998)]. Even though the melting and pouring operations may be considered as the core of foundry operations and their scheduling is of central importance, the scheduling of heat-treatment furnaces is also of considerable importance. This is because the processing time required at the heat treatment furnace is often longer compared to other operations in the steel-casting foundry and therefore considerably affects the scheduling, overall flow time and WIP inventory. Further, the heat-treatment operation is critical because it determines the final properties that enable components to perform under demanding service conditions such as large mechanical load, high temperature and anti-corrosive processing. It is also important to note that the heat-treatment operation is the only predominantly long process in the entire steel casting manufacturing process, taking up a large part of total processing time (taking up to a few days as against other processes that typically take only a few hours). Because of these, the heat-treatment operation is a major bottleneck operation in the entire steel casting process. The jobs in the WIP inventory in front of heat-treatment furnace vary widely in sizes (few grams to a ton) and dimensions (from 10 mm to 2000 mm). 
Furthermore, castings are primarily classified into a number of job families based on alloy type, such as low-alloy and high-alloy castings. These job families are incompatible, as the temperature requirements of low-alloy and high-alloy castings differ for the same type of heat-treatment operation. The families are further classified into various sub-families based on the type of heat-treatment operations they undergo. These sub-families are also incompatible, as each requires a different combination of heat-treatment operations. The widely varying job sizes, job dimensions and multiple incompatible job families introduce a high degree of complexity into scheduling the heat-treatment furnace. Scheduling of the heat-treatment furnace with multiple incompatible job families can have a profound effect on the overall production rate, as the processing time of the heat-treatment operation is much longer than that of other operations. Considering the complexity of the process and the time consumed by the heat-treatment operation, efficient scheduling of this operation is imperative in order to maximize throughput and to enhance the productivity of the entire steel casting manufacturing process. This is of importance to the firm. The concerns of management in increasing the throughput of the bottleneck machine, and thereby productivity, motivated us to adopt the scheduling objective of makespan. From recent observation of heat-treatment operations in a couple of steel casting industries, and from the research studies reported in the literature, we noticed that the real-life problem of dynamic scheduling of a heat-treatment furnace with multiple incompatible job families, non-identical job sizes, non-identical job dimensions, and non-agreeable release times and due dates, with the objectives of maximizing throughput and utilization and minimizing work-in-process inventory, has not been addressed at all. However, there are very few studies [Mathirajan et al.
(2001, 2002, 2004a, 2007)] that have addressed the problem of scheduling a heat-treatment furnace with incompatible job families and non-identical job sizes to maximize the utilization of the furnace. Owing to the gap between the real-life situation of dynamic scheduling of heat-treatment furnaces in steel casting manufacturing and the research reported on this problem, we identified three new classes of batch-processor problems, applicable to real-life situations, based on the type of heat-treatment operation(s) being carried out and the type of steel casting industry (small-, medium- or large-scale), and this thesis addresses these new classes of research problems on scheduling of batch processors. The first part of the thesis addresses our new Research Problem (called Research Problem 1) of minimizing makespan (Cmax) on a batch processor (BP) with a single job family (SJF), non-identical job sizes (NIJS) and non-identical job dimensions (NIJD). This problem is of interest to small-scale steel casting industries performing only one type of heat-treatment operation, such as surface hardening. Generally, only a few steel casting industries offer such special heat-treatment operations, so customers are willing to accept delays in the completion of their orders; due-date issues are therefore not important for these industries. We formulate the problem as a Mixed Integer Linear Programming (MILP) model and validate the proposed model through a numerical example. To understand the computational intractability issue, we carry out a small computational experiment. Its results indicate that the computational time required to solve the MILP model grows non-deterministically and non-polynomially with problem size.
Due to the computational intractability of the proposed MILP model, we propose five variants of a greedy heuristic algorithm and a genetic algorithm for addressing Research Problem 1. We carry out computational experiments to assess the heuristic algorithms from two perspectives: (i) comparison with optimal solutions on small-scale instances and (ii) comparison with a lower bound on large-scale instances. We choose five important problem parameters for the computational experiment and propose a suitable experimental design to generate pseudo problem instances. As no lower-bound (LB) procedure exists for Research Problem 1, in this thesis we develop an LB procedure that provides an LB on makespan by considering both the NIJS and NIJD characteristics together. Before using the proposed LB procedure for evaluating the heuristic algorithms, we conduct a computational experiment to assess the quality of the LB on makespan against the optimal makespan on a number of small-scale instances. The results indicate that the proposed LB procedure is efficient and can be used to obtain an LB on makespan for any large-scale problem. In the first perspective of the evaluation, the proposed heuristic algorithms are run on small-scale problem instances and the makespan values are recorded. We solve the MILP model to obtain optimal solutions for these instances. For comparing the proposed heuristic algorithms we use the performance measures: (a) the number of times the heuristic solution equals the optimal solution and (b) the average loss with respect to the optimal solution, in percent. In the second perspective, the proposed heuristic algorithms are run on large-scale problem instances and the makespan values are recorded.
The LB procedure is also run on these instances to obtain an LB on makespan. To compare the heuristic algorithms against this LB, we use four performance measures: (a) the number of times a heuristic solution equals the LB on makespan, (b) the average percentage loss with respect to the LB, (c) the average relative percentage deviation, and (d) the maximum relative percentage deviation. We extend Research Problem 1 by including additional job characteristics: job arrival time at the WIP inventory area of the heat-treatment furnace, due dates, and an additional constraint on non-agreeable release times and due dates (NARD). Due-date considerations and the NARD constraint (together called Research Problem 2) are imperative for small-scale steel casting foundries performing a single traditional heat-treatment operation, such as annealing, where due-date compliance is important because many steel casting industries offer such operations. The mathematical model, LB procedure, greedy heuristic algorithm, and genetic algorithm proposed for Research Problem 1, including the computational experiments, are appropriately modified and/or extended to address Research Problem 2. Finally, we extend Research Problem 2 by including an additional real-life dimension: multiple incompatible job families (MIJF). This new research problem (called Research Problem 3) is more relevant to medium- and large-scale steel casting foundries performing more than one type of heat-treatment operation, such as homogenizing and tempering, or normalizing and tempering. The solution methodologies, the LB procedure, and the computational experiments proposed for Research Problem 2 are further modified and enriched to address Research Problem 3.
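The deviation measures in (c) and (d) are standard, and can be sketched together with a simple area-based lower bound. This bound is only illustrative and much weaker than the thesis's proposed LB procedure, which handles the NIJS and NIJD characteristics jointly:

```python
def simple_lower_bound(sizes, times, capacity):
    """An illustrative LB on makespan (not the thesis's procedure).

    Two easy arguments: no schedule is shorter than the longest single
    job, and the total 'work area' (size * time) must be packed into a
    furnace of the given capacity.
    """
    area = sum(s * t for s, t in zip(sizes, times)) / capacity
    return max(max(times), area)

def relative_deviations(heuristic_values, lb_values):
    """Average and maximum relative percentage deviation from the LB,
    over a set of instances: 100 * (heuristic - LB) / LB."""
    devs = [100.0 * (h - lb) / lb
            for h, lb in zip(heuristic_values, lb_values)]
    return sum(devs) / len(devs), max(devs)
```

For instance, heuristic makespans [7, 10] against lower bounds [7, 8] give an average relative deviation of 12.5% and a maximum of 25%; the thesis reports these measures across large-scale generated instances for each algorithm variant.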
From the detailed computational experiments conducted for each of the research problems defined in this study, we observe that: (a) the problem parameters considered in this study influence the performance of the heuristic algorithms, (b) the proposed LB procedure is efficient, (c) the proposed genetic algorithm outperforms the other proposed heuristic algorithms (although its computational time grows as problem size increases), and (d) if the decision maker wants the computationally most efficient of the proposed algorithms, the greedy heuristic variants SWB, SWB(NARD), and SWB(NARD&MIJF) are relatively the best algorithms for Research Problems 1, 2, and 3, respectively.
