41

Optimalizace milkrunových jízd v automobilovém průmyslu / Milkrun optimization in automotive industry

Václavů, Tomáš January 2014
This thesis connects operations research with automotive logistics. The supply chain in this industry is hard to coordinate, and flexibility is regarded as a key performance indicator. Companies increase their competitiveness by optimizing the supply chain and by trying to control every flow and process, an idea that runs through all departments. The underlying concept, which links the whole supply chain to the information flow, implies that stock becomes unnecessary because every input is available on time and in the requested quantity and quality. The thesis describes inbound logistics and introduces several methods used for planning vehicle routes. Based on a car maker's data set, a group of suppliers was selected for the analysis. A vehicle routing problem with some modifications was used, together with a heuristic based on the covering problem, and the solution found was compared with the present state.
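Since the abstract does not spell out the covering-based heuristic, the following is only a hedged sketch of how milkrun route selection via the classical greedy set-covering heuristic might look; the route names, supplier sets and costs below are invented for illustration and are not taken from the thesis.

```python
from typing import Dict, FrozenSet, Set

def greedy_route_cover(routes: Dict[str, FrozenSet[str]],
                       cost: Dict[str, float]) -> Set[str]:
    """Greedy set-covering heuristic: repeatedly pick the candidate route
    with the lowest cost per newly covered supplier until every supplier
    is served. Returns the names of the chosen routes."""
    uncovered = set().union(*routes.values())
    chosen: Set[str] = set()
    while uncovered:
        # Among routes that still cover something new, pick the cheapest
        # per newly covered supplier.
        best = min(
            (r for r in routes if routes[r] & uncovered),
            key=lambda r: cost[r] / len(routes[r] & uncovered),
        )
        chosen.add(best)
        uncovered -= routes[best]
    return chosen

# Hypothetical candidate milkrun routes (supplier sets) and their costs.
routes = {"R1": frozenset({"S1", "S2"}),
          "R2": frozenset({"S2", "S3", "S4"}),
          "R3": frozenset({"S1", "S4"})}
cost = {"R1": 120.0, "R2": 180.0, "R3": 150.0}
print(greedy_route_cover(routes, cost))  # {'R1', 'R2'} (set order may vary)
```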
42

Optimalizace rozvozu pekárenských výrobků / Optimization of distribution of bakery goods

Gebauerová, Monika January 2010
This thesis deals with optimizing the distribution of bakery products. First, the fundamental types of vehicle routing problems and their optimization models are introduced. The next part is dedicated to heuristic algorithms: the heuristic methods are presented in general and the chosen methods are then described in detail. Two algorithms are subsequently formulated, one based on the nearest-neighbour method and the other on the savings algorithm. Both were programmed in Visual Basic for Applications in MS Excel 2010 and applied to a real distribution problem, for which the bakery company provided data about its customers. The last part of the thesis summarizes and compares the solutions obtained by the proposed algorithms with the solution the bakery company has put into practice.
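The savings algorithm mentioned here is the classical Clarke-Wright procedure. The thesis implements it in VBA; a minimal Python sketch of the savings computation is shown below with an invented distance matrix and without the capacity-constrained route merging that a full implementation would add.

```python
import itertools

def savings_list(dist, depot=0):
    """Clarke-Wright savings s(i, j) = d(depot, i) + d(depot, j) - d(i, j),
    sorted in decreasing order; route merging would then proceed down this list,
    subject to vehicle capacity."""
    customers = [i for i in range(len(dist)) if i != depot]
    savings = []
    for i, j in itertools.combinations(customers, 2):
        s = dist[depot][i] + dist[depot][j] - dist[i][j]
        savings.append((s, i, j))
    return sorted(savings, reverse=True)

# Illustrative symmetric distance matrix: node 0 is the bakery (depot).
dist = [[0, 4, 6, 5],
        [4, 0, 3, 7],
        [6, 3, 0, 2],
        [5, 7, 2, 0]]
for s, i, j in savings_list(dist):
    print(f"merge candidate ({i}, {j}): saving {s}")
```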
43

A equação unidimensional de difusão de nêutrons com modelo multigrupo de energia e meio heterogêneo : avaliação do fluxo para problemas estacionários e de cinética / The one dimensional diffusion equation with multi group energy model and heterogeneous media: flux evaluation to stationary and kinetic problems

Ceolin, Celina January 2014
In this dissertation the one-dimensional neutron diffusion equation for stationary and kinetic problems in a multi-layer slab is solved considering the multi-group energy model. One of the objectives and innovations of this work is to obtain an approximate solution with error estimation and accuracy control, in the form of an analytical expression. With this kind of solution there is no need for the interpolation schemes that are usually required when the domain is discretized. The neutron flux is expanded in a Taylor series whose coefficients are found using the differential equation and the boundary and interface conditions. The domain is divided into several cells, whose size and polynomial order can be adjusted according to the required accuracy. The eigenvalue problem is solved with the conventional power method. The methodology is applied to a benchmark problem consisting of the solution of the diffusion equation as an initial condition and the solution of kinetic problems for different transients. The results compare successfully with those in the literature. Convergence of the series is guaranteed by applying a criterion based on the Lipschitz condition for continuous functions. The solution obtained, together with the convergence analysis, shows the robustness and accuracy of the methodology.
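For orientation only, and in assumed notation rather than the thesis's own, the stationary multigroup diffusion equation in slab geometry and the truncated Taylor expansion of the group flux about a cell centre x_0 can be written as:

```latex
% Assumed notation (group g, removal cross-section \Sigma_{R,g}, fission spectrum \chi_g).
\begin{align}
  -D_g \frac{d^2 \phi_g}{dx^2} + \Sigma_{R,g}\,\phi_g
    &= \frac{\chi_g}{k_{\mathrm{eff}}} \sum_{g'} \nu\Sigma_{f,g'}\,\phi_{g'}
     + \sum_{g' \neq g} \Sigma_{s,\,g' \to g}\,\phi_{g'}, \\
  \phi_g(x) &\approx \sum_{n=0}^{N} \frac{\phi_g^{(n)}(x_0)}{n!}\,(x - x_0)^n .
\end{align}
```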
44

Escalonamento de tarefas com localidade de dados em grids / Task scheduling with data locality in grids

Póvoa, Marcelo Galvão, 1990- 02 April 2015
Advisor: Eduardo Candido Xavier / Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação / Abstract: Computational systems known as Data Grids provide a flexible, distributed computing infrastructure for data processing and storage, with many applications in large-scale computing. Because large volumes of data are involved, not only efficient task scheduling but also careful file replication is crucial for achieving the best performance. Both problems have been studied independently in the literature, but here they are combined into a single static formulation, in order to optimize a single objective function. First, we show that this problem does not admit an approximation algorithm. However, for a restricted version of the problem we provide a constant-ratio approximation algorithm, and we also survey approximation algorithms for related problems available in the literature. On the practical side, we introduce two novel heuristics for the problem. The first is based on grouping neighbouring nodes into clusters, while the second tries to identify groups of files frequently accessed together. We compare these algorithms with two approaches adapted from the literature through computational simulations on an extensive set of instances based on real grids. Our first heuristic usually obtains the best solutions with good time efficiency, while the second is even faster and still provides competitive solutions.
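As a hedged illustration of the general idea behind a locality-aware placement heuristic (the dissertation's actual algorithms are not reproduced here; the cluster contents, file sizes and load tie-break below are invented), a locality-first task placement might look like this:

```python
def place_tasks(tasks, clusters):
    """Illustrative data-locality heuristic: send each task to the cluster that
    already stores most of its input bytes, breaking ties by current load, and
    update the load estimate as tasks are assigned."""
    load = {c: 0.0 for c in clusters}          # running work per cluster
    placement = {}
    for name, task in tasks.items():
        def score(c):
            local = sum(size for f, size in task["inputs"].items()
                        if f in clusters[c]["files"])
            return (-local, load[c])           # prefer locality, then low load
        best = min(clusters, key=score)
        placement[name] = best
        load[best] += task["work"]
    return placement

clusters = {"C1": {"files": {"a.dat", "b.dat"}},
            "C2": {"files": {"c.dat"}}}
tasks = {"t1": {"inputs": {"a.dat": 500, "c.dat": 100}, "work": 3.0},
         "t2": {"inputs": {"c.dat": 800}, "work": 1.0}}
print(place_tasks(tasks, clusters))  # {'t1': 'C1', 't2': 'C2'}
```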
45

An Investigation of Fast and Frugal Heuristics for New Product Project Selection

Albar, Fatima Mohammed 05 June 2013
In the early stages of new product development, project selection is based predominantly on managerial intuition rather than on analytic approaches. As much as 90% of all product ideas are rejected before they are formally assessed. However, to date, little is known about the product screening heuristics and screening criteria managers use: it has been suggested that their decision process resembles the "fast and frugal" heuristics identified in recent psychological research, but no empirical research exists. A major part of the product innovation pipeline is thus poorly understood. This research contributes to closing this gap. It uses cognitive task analysis for an in-depth analysis of the new product screening heuristics of twelve experienced decision makers across 66 decision cases. Based on the emerging data, an integrated model of their project screening heuristics is created. Results show that experts adapt their heuristics to the decision at hand. In doing so, they use a much smaller set of decision criteria than discussed in the product development literature. They also combine heuristics into decision approaches that are simple, but more complex than "fast and frugal" strategies. By opening the black box of project screening, this research enables improved project selection practices.
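For readers unfamiliar with the term, a "fast and frugal" heuristic can be pictured as a shallow decision tree that inspects one cue at a time and exits at the first failed test. The sketch below is a generic illustration with invented cues and thresholds, not the screening model derived in this research.

```python
def screen_project(project, cues):
    """Minimal fast-and-frugal tree: inspect one cue at a time and exit at the
    first cue whose test fails; the last cue forces a decision either way.
    Cues and thresholds are hypothetical."""
    for name, test, exit_decision in cues[:-1]:
        if not test(project):
            return exit_decision, name        # early exit on this cue
    name, test, _ = cues[-1]
    return ("accept" if test(project) else "reject"), name

cues = [
    ("strategic_fit", lambda p: p["strategic_fit"] >= 3, "reject"),
    ("market_size",   lambda p: p["market_size_musd"] >= 10, "reject"),
    ("tech_feasible", lambda p: p["feasibility"] >= 0.5, "reject"),
]
project = {"strategic_fit": 4, "market_size_musd": 25, "feasibility": 0.3}
print(screen_project(project, cues))  # ('reject', 'tech_feasible')
```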
46

Biagrupamento heurístico e coagrupamento baseado em fatoração de matrizes: um estudo em dados textuais / Heuristic biclustering and coclustering based on matrix factorization: a study on textual data

Ramos Diaz, Alexandra Katiuska 16 October 2018
Biclustering and coclustering are data mining tasks that allow the extraction of relevant information from data and have been applied successfully in a wide variety of domains, including those involving textual data, the focus of this research. In biclustering and coclustering, similarity criteria are applied simultaneously to the rows and columns of the data matrices, grouping objects and attributes at the same time and enabling the discovery of biclusters/coclusters. Their definitions vary according to their nature and objectives, and coclustering can be seen as a generalization of biclustering. Applied to textual data, these tasks require a vector space model representation, which commonly leads to spaces characterized by high dimensionality and sparsity and affects the performance of many algorithms. This work analyses the behaviour of the Cheng and Church biclustering algorithm and of the Non-Negative Block Value Decomposition (NBVD) coclustering algorithm in the context of textual data. Quantitative and qualitative experimental results are reported for synthetic datasets created with different sparsity levels and for a real dataset. The results are evaluated in terms of biclustering-specific measures, internal clustering measures applied to the projections onto the rows of the biclusters/coclusters, and the information generated. The analysis clarifies the difficulties faced by these algorithms in the experimental environment, as well as whether they are able to provide distinctive information useful to text mining. Overall, the analyses show that NBVD is better suited to datasets of high dimensionality and high sparsity, whereas the Cheng and Church algorithm, although it achieves good results with respect to its own objectives, yields results of low relevance in the context of textual data.
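The core quantity behind the Cheng and Church (2000) algorithm is the mean squared residue of a candidate bicluster; a short sketch of that score is given below (the full node-deletion/addition algorithm is not shown, and the toy matrix is invented).

```python
import numpy as np

def mean_squared_residue(A, rows, cols):
    """Mean squared residue H(I, J) of Cheng and Church (2000):
    H = mean over the bicluster of (a_ij - rowMean_i - colMean_j + overallMean)^2.
    A low H indicates a coherent bicluster (a delta-bicluster if H <= delta)."""
    sub = A[np.ix_(rows, cols)]
    row_means = sub.mean(axis=1, keepdims=True)
    col_means = sub.mean(axis=0, keepdims=True)
    residue = sub - row_means - col_means + sub.mean()
    return float((residue ** 2).mean())

# Toy document-term matrix; rows/cols index a candidate bicluster.
A = np.array([[5.0, 4.0, 1.0],
              [6.0, 5.0, 0.0],
              [1.0, 0.0, 7.0]])
print(mean_squared_residue(A, rows=[0, 1], cols=[0, 1]))  # 0.0 -> perfectly coherent
```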
47

Otimização multiobjetivo dos parâmetros do sistema de suspensão de um modelo de veículo completo através de um algoritmo meta-heurístico / Multi-objective optimization of the suspension system parameters of a full-vehicle model using a meta-heuristic algorithm

Fossati, Giovani Gaiardo January 2017
This work optimized the lumped parameters of a full-vehicle model's suspension system, the model representing a passenger car travelling at constant speed over a road profile defined according to ISO 8608, 1995, using a multi-objective meta-heuristic optimization algorithm. Two numerical-computational routines were developed to perform this optimization in both the time and frequency domains. The use of meta-heuristic algorithms has been growing in mechanical system optimization, providing speed and accuracy in obtaining optimal results. Combining an optimization algorithm with a model that satisfactorily represents a mechanical system yields a tool that indicates the system's maximum-efficiency parameters and can be used in numerous applications. By integrating the dynamic analysis routines in the time and frequency domains with the multi-objective genetic optimization algorithm NSGA-II, developed by Deb et al., 2002, two Pareto-optimal fronts were obtained.
These fronts consist of the set of non-dominated solutions that minimize the following objective functions: the weighted RMS value of the driver's seat vertical acceleration, the mean RMS value of the dynamic amplification factor of the model's four wheels, and the maximum relative displacement between each wheel and the vehicle body. The method proposed by Shinozuka and Jan, 1972, is used to obtain the road irregularity profile in the time domain from the power spectral density (PSD) equations that represent the different pavement classes. Newmark's method, 1959, is used to solve the differential equation of motion in the time domain and obtain the model's dynamic response to these irregularities. The dynamic behaviour of the vehicle model in the frequency domain was obtained through the frequency response function (FRF) of the analysed model. The results showed that both dynamic analysis routines produce results consistent with those found in the literature, and that the implemented optimization algorithms provide Pareto-optimal fronts for the proposed problems.
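Newmark's method advances the equations of motion M q'' + C q' + K q = f(t) one time step at a time. The following is a generic sketch of the average-acceleration variant; the matrices and initial state are placeholders, not the full-vehicle model used in the thesis.

```python
import numpy as np

def newmark_step(M, C, K, f_next, q, v, a, dt, beta=0.25, gamma=0.5):
    """One step of the Newmark method (average-acceleration variant) for
    M q'' + C q' + K q = f(t). Returns displacement, velocity and acceleration
    at t + dt."""
    # Effective stiffness and effective load at t + dt
    K_eff = K + gamma / (beta * dt) * C + M / (beta * dt ** 2)
    rhs = (f_next
           + M @ (q / (beta * dt ** 2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
           + C @ (gamma / (beta * dt) * q + (gamma / beta - 1) * v
                  + dt * (gamma / (2 * beta) - 1) * a))
    q_new = np.linalg.solve(K_eff, rhs)
    a_new = (q_new - q) / (beta * dt ** 2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
    v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
    return q_new, v_new, a_new

# Single-degree-of-freedom example (initial acceleration consistent with f(0) = 0).
M = np.array([[2.0]]); C = np.array([[0.4]]); K = np.array([[50.0]])
q, v, a = np.array([0.0]), np.array([0.0]), np.array([0.0])
q, v, a = newmark_step(M, C, K, np.array([1.0]), q, v, a, dt=0.01)
print(q, v, a)
```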
49

Resource management for data streaming applications

Agarwalla, Bikash Kumar 07 July 2010
This dissertation investigates novel middleware mechanisms for building streaming applications. Developing streaming applications is a challenging task because (i) they are continuous in nature; (ii) they require fusion of data coming from multiple sources to derive higher level information; (iii) they require efficient transport of data from/to distributed sources and sinks; (iv) they need access to heterogeneous resources spanning sensor networks and high performance computing; and (v) they are time critical in nature. My thesis is that an intuitive programming abstraction will make it easier to build dynamic, distributed, and ubiquitous data streaming applications. Moreover, such an abstraction will enable an efficient allocation of shared and heterogeneous computational resources thereby making it easier for domain experts to build these applications. In support of the thesis, I present a novel programming abstraction, called DFuse, that makes it easier to develop these applications. A domain expert only needs to specify the input and output connections to fusion channels, and the fusion functions. The subsystems developed in this dissertation take care of instantiating the application, allocating resources for the application (via the scheduling heuristic developed in this dissertation) and dynamically managing the resources (via the dynamic scheduling algorithm presented in this dissertation). Through extensive performance evaluation, I demonstrate that the resources are allocated efficiently to optimize the throughput and latency constraints of an application.
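Purely to convey the flavour of such a fusion-channel abstraction, a toy sketch is given below. All names are invented for illustration; this is not DFuse's actual API, and the real system also handles resource allocation, transport and dynamic re-scheduling.

```python
from typing import Callable, List

class FusionChannel:
    """Hypothetical sketch of a fusion-channel style abstraction (names invented).
    A channel either holds a value fed from a source, or fuses the values pulled
    from its input channels with a user-supplied fusion function."""
    def __init__(self, inputs: List["FusionChannel"], fuse: Callable[[list], object] = None):
        self.inputs = inputs
        self.fuse = fuse
        self._value = None

    def push(self, value):                     # leaf channels are fed directly
        self._value = value

    def pull(self):
        if self.inputs:                        # interior channel: fuse its inputs
            self._value = self.fuse([c.pull() for c in self.inputs])
        return self._value

# Two sensor feeds fused into one higher-level reading.
cam = FusionChannel([])
mic = FusionChannel([])
combined = FusionChannel([cam, mic], fuse=lambda xs: {"frame": xs[0], "audio": xs[1]})
cam.push("frame-017"); mic.push("chunk-017")
print(combined.pull())  # {'frame': 'frame-017', 'audio': 'chunk-017'}
```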
50

Heuristic Scheduling Algorithms For Parallel Heterogeneous Batch Processors

Mathirajan, M 11 1900
In the last decade, market pressures for a greater variety of products forced a gradual shift from continuous manufacturing to batch manufacturing in various industries. Consequently, batch scheduling problems have attracted the attention of researchers in production and operations management. This thesis addresses the scheduling of parallel non-identical batch processors in the presence of dynamic job arrivals, incompatible job-families and non-identical job sizes. The problem abstracts the scheduling of heat-treatment furnace operations of castings in a steel foundry and is of considerable interest in this sector, as a large proportion of the total production time is spent in heat-treatment processing. It is also encountered in other industrial settings, such as the burn-in operation in the final testing stage of semiconductor manufacturing and the manufacturing of steel, ceramics, aircraft parts, footwear, etc. A detailed literature review and personal communications with experts revealed that this class of batch scheduling problems has not been addressed hitherto. A major concern in the management of foundries is to maximize throughput and reduce flow time and work-in-process inventories. We therefore chose the utilization of the batch processors as the primary scheduling objective, with the minimization of overall flow time and of weighted average waiting time per job as secondary objectives. This formulation can be considered an extension of the problems studied by DOBSON AND NAMBINADOM (1992), UZSOY (1995), ZEE et al. (1997) and MEHTA AND UZSOY (1998). Our effort to carefully catalogue the large number of variants of deterministic batch scheduling problems led us to the development of a taxonomy and notation. Not surprisingly, we are able to show that our problem is NP-hard and is therefore in the company of many scheduling problems that are difficult to solve. Initially, two heuristic algorithms were developed: a mathematical programming based heuristic algorithm (MPHA) and a greedy heuristic algorithm. Due to the computational overheads of MPHA compared with the greedy heuristic, we chose to focus on the latter as the primary scheduling methodology. Preliminary experimentation showed that the performance of greedy heuristics depends critically on the selection of job-families, so eight variants of the greedy heuristic were proposed that differ mainly in the decision on job-family selection. These eight heuristics form two sets, {A1, A2, A3, A4} and the modified {MA1, MA2, MA3, MA4}, which differ only in how the job-family selection index, weighted shortest processing time, is computed. Computational experiments were carried out to evaluate the performance of the eight heuristics, and the analysis of the experimental data is presented from two perspectives. The first evaluates the absolute quality of the solutions obtained by the proposed heuristic algorithms against estimated optimal solutions; the second compares the relative performance of the proposed heuristics. The test problems were designed to reflect real-world scheduling problems that we have observed in the steel-casting industry. Three important problem parameters for the test set generation are the number of jobs [n], job priority [P] and job family [F]; we considered 5 different levels for n, 2 different levels for P and 2 different levels for F.
The test set reflects that (i) job sizes vary uniformly, (ii) there are two batch processors and (iii) there are five incompatible job-families with different processing times. 15 problem instances were generated for each level of (n, P, F). Out of the many procedures available in the literature for estimating the optimal value of combinatorial optimization problems, we used the procedure based on the Weibull distribution as discussed in Rardin and Uzsoy (2001). For each of the 300 randomly generated problem instances, 15 feasible solutions (i.e., values of the average utilization of batch processors, AUBP) were obtained using a random decision rule for the first two stages and a best-fit heuristic for the last stage of the scheduling problem. These 15 feasible solutions were used to estimate the optimal value and are expected to yield the estimated optimal value of the problem instance with very high probability. Both the average and the worst-case performance of the heuristics indicated that algorithms A3 and A4 on average yielded better utilization than the estimated optimal value, which suggests that the Weibull-based technique may have produced conservative estimates of the optimum. The other heuristic algorithms found solutions inferior to the estimated optimal value, but the deviations were very small. From this we may infer that all the proposed heuristic algorithms are acceptable. The relative evaluation of the heuristics considered both computational effort and solution quality. The computational burden is low enough on average to run all the proposed heuristics on each problem instance and select the best solution; it is also observed that any algorithm from the first set {A1, A2, A3, A4} takes more computational time than any one from the second set {MA1, MA2, MA3, MA4}. Regarding solution quality, the following inferences were made: (a) in general, the heuristic algorithms are sensitive to the choice of problem factors with respect to all the scheduling objectives; (b) the three algorithms A3, MA4 and MA1 are observed to be superior with respect to maximizing the average utilization of batch processors (AUBP), minimizing overall flow time (OFT) and minimizing weighted average waiting time (WAWT), respectively. Furthermore, the heuristic algorithm MA1 turns out to be the best choice if we trade off all three objectives AUBP, OFT and WAWT. Finally, we carried out simple sensitivity analysis experiments to understand the influence of some scheduling parameters on the performance of the heuristic algorithms. These involved one-at-a-time changes in (1) the job-size distribution, (2) the capacities of the batch processors and (3) the processing times of the job-families. The analyses show that changes in these input parameters do have an influence, and their results can be used to guide the selection of a heuristic for a particular combination of input parameters. For example, if a single heuristic algorithm has to be picked, MA1 is the best choice when considering both the performance and the robustness indicated by the sensitivity analysis. In summary, this thesis examined a problem arising in the scheduling of heat-treatment operations in the steel-casting industry. This problem was abstracted to a class of deterministic batch scheduling problems.
We analyzed the computational complexity of this problem and showed that it is NP-hard and therefore unlikely to admit a scalable exact method. Eight variants of a fast greedy heuristic were designed to solve the scheduling problem of interest. Extensive computational experiments were carried out to compare the performance of the heuristics with estimated optimal values (obtained using the Weibull technique) and to assess their relative effectiveness; they showed that the heuristics consistently obtain solutions close to the estimated optimum with very low computational burden, even for large-scale problems. Finally, a comprehensive sensitivity analysis was carried out to study the influence of a few parameters, changed one at a time, on the performance of the heuristic algorithms. This type of analysis gives users some confidence in the robustness of the proposed heuristics.
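The Weibull-based estimation referred to above fits an extreme-value model to a sample of independent heuristic solution values and reads off the fitted location parameter as a point estimate of the unknown optimum. The sketch below conveys the idea only; it is not necessarily the exact fitting procedure of Rardin and Uzsoy (2001), and the sample values are invented.

```python
import numpy as np
from scipy.stats import weibull_min

def estimate_optimum(values, maximize=False):
    """Rough sketch of a Weibull-based estimate of an unknown optimum from a
    sample of independent heuristic solution values. For a minimization problem,
    the fitted location parameter estimates the minimum; for maximization, the
    sample is negated first and the sign restored afterwards."""
    data = np.asarray(values, dtype=float)
    if maximize:
        data = -data
    shape, loc, scale = weibull_min.fit(data)   # 3-parameter Weibull by MLE
    return -loc if maximize else loc

# 15 utilization values from independent randomized heuristic runs (made up).
sample = [0.81, 0.83, 0.78, 0.85, 0.80, 0.84, 0.82, 0.79,
          0.86, 0.83, 0.81, 0.84, 0.80, 0.82, 0.85]
print(round(estimate_optimum(sample, maximize=True), 3))
```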
