1 |
A CPU-GPU Hybrid Approach for Accelerating Cross-correlation Based Strain Elastography. Deka, Sthiti, 2010.
Elastography is a non-invasive imaging modality that uses ultrasound to estimate the elasticity of soft tissues; the resulting images are called 'elastograms'. Elastography techniques are promising as cost-effective tools for the early detection of pathological changes in soft tissue. The quality of elastographic images depends on the accuracy of the local displacement estimates. Cross-correlation-based displacement estimators are precise and sensitive; however, they are computationally intensive, which may limit the use of elastography as a real-time diagnostic tool. This study investigates the use of parallel general-purpose graphics processing unit (GPGPU) engines for generating elastograms at real-time frame rates while preserving elastographic image quality. To achieve this goal, a cross-correlation-based time-delay estimation algorithm was developed in the C programming language and profiled to locate performance bottlenecks. The hotspots were addressed by employing software pipelining, read-ahead, and the elimination of redundant computations. The algorithm was then analyzed for parallelization on the GPGPU, and the stages that would map well to the GPGPU hardware were identified. By employing optimization principles for efficient memory access and efficient execution, a net improvement of 67x over the original optimized C version of the estimator was achieved. For typical diagnostic depths of 3-4 cm and typical elastographic processing parameters, this implementation can yield frame rates on the order of 50 fps. It was also observed that not all stages of elastography can be offloaded to the GPGPU, because some stages have sub-optimal memory access patterns. Additionally, data transfer from graphics card memory to system memory can be efficiently overlapped with concurrent CPU execution. Therefore a hybrid model of computation, in which the load is optimally distributed between CPU and GPGPU, was identified as the appropriate approach to the speed-quality problem in real-time imaging. The results of this research suggest that using the GPGPU as a co-processor to the CPU may allow generation of elastograms at real-time frame rates without significant compromise in image quality, a scenario that would be very favorable for real-time clinical elastography.
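The abstract does not reproduce the estimator; as a sketch of the kernel such a pipeline spends most of its time in, the C routine below searches for the lag that maximizes the normalized cross-correlation (NCC) between a pre-compression window and the post-compression signal. The function name and windowing parameters are illustrative assumptions, not the thesis's code; on the GPGPU, each (window, lag) pair maps naturally onto a thread.

#include <math.h>

/* Illustrative NCC time-delay search (not the thesis's implementation):
 * for one pre-compression window, return the lag into the
 * post-compression signal that maximizes the NCC coefficient.
 * The caller must guarantee win_start - max_lag >= 0 and
 * win_start + win_len + max_lag <= signal length. */
int ncc_best_lag(const float *pre, const float *post,
                 int win_start, int win_len, int max_lag)
{
    int best_lag = 0;
    float best_rho = -2.0f;                    /* NCC lies in [-1, 1] */

    for (int lag = -max_lag; lag <= max_lag; lag++) {
        float sxy = 0.0f, sxx = 0.0f, syy = 0.0f;
        for (int i = 0; i < win_len; i++) {
            float x = pre[win_start + i];
            float y = post[win_start + i + lag];
            sxy += x * y;
            sxx += x * x;
            syy += y * y;
        }
        float rho = sxy / (sqrtf(sxx * syy) + 1e-12f);
        if (rho > best_rho) { best_rho = rho; best_lag = lag; }
    }
    return best_lag;    /* local displacement estimate, in samples */
}

The partial sums recomputed across overlapping windows are the kind of redundant work the abstract describes eliminating on the CPU side.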
2 |
An Expanded Speedup Model for the Early Phases of High Performance Computing Cluster (HPCC) Design. Gabriel, Matthew Frederick, 15 May 2013.
The size and complexity of many scientific and enterprise-level applications require a high degree of parallelization in order to produce outputs within an acceptable period of time. This often necessitates the use of high performance computing clusters (HPCCs) and parallelized applications that are carefully designed and optimized. A myriad of papers study the various factors that influence performance and then attempt to quantify the maximum theoretical speedup a cluster can achieve relative to a sequential processor.
These studies tend to investigate each influence in isolation, but in practice the factors are interdependent, and it is the interaction, rather than any solitary influence, that normally bounds the design trade space. To address this disconnect, this thesis blends the studies into an expanded speedup model which captures the interplay. The model is intended to help the cluster engineer make initial estimates during the early phases of design, while the system is not yet mature enough for refinement using timing studies.
The model pulls together factors such as problem scaling, resource allocation, critical sections, and the problem's inherent parallelizability. The derivation was examined theoretically and then validated by timing studies on a physical HPCC. The validation studies found the model to be an adequate generic first approximation. However, customizations may be needed to account for application-specific influences, such as bandwidth limitations and communication delays, which are not readily incorporated into a generic model.
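The expanded model itself is developed in the thesis; the baseline it generalizes is Amdahl's law, and the right-hand form below is one common textbook-style extension (our illustration, not the thesis's formula) in which a contention term κ(p) charges the time serialized in critical sections:

% Amdahl's law for parallelizable fraction f on p processors, and an
% illustrative extension adding critical-section contention kappa(p).
\[
  S(p) = \frac{1}{(1-f) + \dfrac{f}{p}}
  \qquad\longrightarrow\qquad
  S(p) = \frac{1}{(1-f) + \dfrac{f}{p} + \kappa(p)}
\]

Because κ(p) typically grows with p while f/p shrinks, the two terms interact to bound the design trade space, which is precisely the kind of interplay the thesis's model captures.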
3 |
Fast Gaussian Evaluations in Large Vocabulary Continuous Speech Recognition. Srivastava, Shivali, 13 December 2002.
Rapid advances in speech recognition theory, as well as computing hardware, have led to the development of machines that can take human speech as input, decode the information content of the speech, and respond accordingly. Real-time performance of such systems is often dominated by the evaluation of likelihoods in the statistical modeling component of the system. Statistical models are typically implemented using Gaussian mixture distributions. The primary objective of this thesis was to develop an extension of the Bucket Box Intersection algorithm in which the dimension with the optimal number of splits can be selected when multiple minima are present. The effects of normalization of mixture weights and Gaussian clipping have also been investigated. We show that the Extended BBI algorithm (EBBI) reduces run-time by 21% without introducing any approximation error. EBBI also produced a 12% lower word error rate than Gaussian clipping for the same computational complexity. These approaches were evaluated on a wide variety of tasks including conversational speech.
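The likelihood computation being accelerated is the standard diagonal-covariance Gaussian-mixture evaluation; a plain C sketch follows for context (the BBI/EBBI pruning itself, which uses k-d-tree-style interval tests to skip components, is the thesis's subject and is not shown). The names and the max-approximation are illustrative assumptions.

#include <math.h>

/* Log-likelihood of one feature vector x[dim] under a diagonal-covariance
 * Gaussian mixture. This inner loop dominates recognizer run time;
 * BBI-style pruning accelerates it by evaluating only the components
 * whose "bucket box" intersects the region containing x. */
float gmm_log_likelihood(const float *x, int dim, int n_mix,
                         const float *logw,   /* log mixture weights        */
                         const float *mean,   /* n_mix * dim means          */
                         const float *ivar,   /* n_mix * dim 1/variances    */
                         const float *gconst) /* per-component norm. const. */
{
    float best = -INFINITY;         /* max-approximation, common in ASR */
    for (int m = 0; m < n_mix; m++) {
        float d = logw[m] - gconst[m];
        for (int i = 0; i < dim; i++) {
            float diff = x[i] - mean[m * dim + i];
            d -= 0.5f * diff * diff * ivar[m * dim + i];
        }
        if (d > best) best = d;
    }
    return best;
}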
4 |
Quantum Speed-ups for Boolean Satisfiability and Derivative-Free Optimization. Arunachalam, Srinivasan, January 2014.
In this thesis, we consider two important problems, Boolean satisfiability (SAT) and derivative-free optimization, in the context of large-scale quantum computers. In the first part, we survey well-known classical techniques for solving satisfiability. We estimate the time it would take to solve SAT instances using quantum techniques and compare it with the state-of-the-art classical heuristics entered annually in SAT competitions. In the second part of the thesis, we consider a few classically well-known algorithms for derivative-free optimization that are ubiquitously employed in engineering problems, and we propose a quantum speedup for them using techniques from the quantum minimum-finding algorithm. In the third part of the thesis, we consider practical applications in bioinformatics, petroleum refining, and civil engineering that involve solving either satisfiability or derivative-free optimization. We investigate whether known quantum speedups for these algorithms translate directly into a benefit for the industries that invest in technology to solve these problems. In the last section, we propose a few open problems that we feel are the immediate hurdles, from either an algorithmic or an architectural perspective, to obtaining a convincing speedup on the practical problems considered.
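The surveyed speedups rest on Grover search and on the Dürr-Høyer minimum-finding routine built from it; their quadratic query advantage is standard and, for concreteness (this summary is ours, not a result of the thesis), reads:

% Quadratic quantum query advantage underlying both parts of the thesis:
% unstructured search over N items, brute-force SAT over n variables,
% and minimum finding over N candidates (Durr-Hoyer).
\[
  O(N) \longrightarrow O(\sqrt{N}),
  \qquad
  O(2^{n}) \longrightarrow O\!\left(2^{n/2}\right),
  \qquad
  T_{\min} = O(\sqrt{N}).
\]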
5 |
ACCTuner: OpenACC Auto-Tuner for Accelerated Scientific Applications. Alzayer, Fatemah, 17 May 2015.
We optimize parameters in OpenACC clauses for a stencil evaluation kernel executed on graphics processing units (GPUs) using a variety of machine learning and optimization search algorithms, individually and in hybrid combinations, and compare execution-time performance to the best obtainable by brute-force search. Several auto-tuning techniques – historic learning, random walk, simulated annealing, Nelder-Mead, and genetic algorithms – are evaluated over a large two-dimensional parameter space, consisting of gang size and vector length, that OpenACC compilers have not addressed satisfactorily to date. A hybrid of historic learning and Nelder-Mead delivers the best balance of high performance and low tuning effort.
GPUs are employed over an increasing range of applications due to the performance available from their large number of cores, as well as their energy efficiency. However, writing code that takes advantage of their massive fine-grained parallelism requires deep knowledge of the hardware and is generally a complex task involving program transformation and the selection of many parameters. To improve programmer productivity, the directive-based programming model OpenACC was announced as an industry standard in 2011, and various compilers have been developed to support it, the most notable being those by Cray, CAPS, and PGI. While GPU architectures and core counts have evolved rapidly, the compilers have failed to keep up in configuring parallel programs to run most efficiently on the hardware. Following successful auto-tuning approaches to obtaining high performance in kernels for cache-based processors, we address this compiler-hardware gap on GPUs by auto-tuning the key parameters “gang” and “vector” in OpenACC clauses. We demonstrate results for a stencil evaluation kernel typical of seismic imaging over a variety of realistically sized three-dimensional grid configurations, with different truncation-error orders in the spatial dimensions. Apart from random walk and historic learning based on the nearest neighbor in grid size, most of our heuristics, including the one that proves best, appear to be applied in this context for the first time. This work is a stepping-stone towards an OpenACC auto-tuning framework for more general high-performance numerical kernels optimized for GPU computation.
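For readers unfamiliar with the two tuned clauses, a minimal OpenACC stencil loop of the kind described is sketched below; G and V are exactly the knobs such an auto-tuner searches over. Variable names, grid sizes, and the stencil coefficients are illustrative assumptions, not the thesis's kernel.

#include <stddef.h>

/* 3D 7-point stencil sweep exposing the two OpenACC tuning knobs,
 * num_gangs and vector_length; the pragma syntax is standard OpenACC. */
#define NX 512
#define NY 512
#define NZ 512
#define IDX(i, j, k) ((size_t)(i) * NY * NZ + (size_t)(j) * NZ + (k))

void stencil_sweep(const float *restrict u, float *restrict v, int G, int V)
{
    #pragma acc parallel loop gang num_gangs(G) vector_length(V) \
                collapse(2) present(u, v)
    for (int i = 1; i < NX - 1; i++)
        for (int j = 1; j < NY - 1; j++)
            #pragma acc loop vector
            for (int k = 1; k < NZ - 1; k++)
                v[IDX(i, j, k)] = 0.5f * u[IDX(i, j, k)]
                    + (u[IDX(i-1, j, k)] + u[IDX(i+1, j, k)]
                    +  u[IDX(i, j-1, k)] + u[IDX(i, j+1, k)]
                    +  u[IDX(i, j, k-1)] + u[IDX(i, j, k+1)]) / 12.0f;
}

An auto-tuner in the style described simply times stencil_sweep over candidate (G, V) pairs and lets the search heuristic propose the next pair.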
6 |
Construção automática de mosaicos de imagens digitais aéreas agrícolas utilizando transformada SIFT e processamento paralelo / Automatic construction of mosaics from agricultural aerial digital images using the SIFT transform and parallel processing. Tarallo, André de Souza, 26 August 2013.
The automatic construction of large mosaics from high-resolution digital images is an area of great importance, with applications in agriculture, meteorology, biology, and remote sensing. In agriculture, efficient decision making depends on obtaining fast, accurate information, especially for the control of pests, diseases, and fires. To date this has been done semi-automatically: a digital terrain model must be obtained, the images orthorectified, and markers placed on the ground by hand so that software can build a mosaic in the conventional way. To automate this process, this project proposes three methodologies (1, 2, 3) based on algorithms already well established in the literature (SIFT, BBF, and RANSAC) and on parallel processing (OpenMP), using high-resolution agricultural aerial images of pasture and of plantations of several crops. The methodologies differ in how the calculations for mosaic construction are performed. Building mosaics from images of this size is not a trivial task, since processing them demands considerable computational effort. The methodologies include a pre-processing step that minimizes distortions introduced during image acquisition, as well as algorithms for smoothing the seams between images in the mosaic. One image database, called the unresized image database, was built from images of 2336 x 3504 pixels (100 images divided into 10 groups of 10), acquired in the region of Santa Rita do Sapucaí - MG, and was used to validate the methodology. Another database, the resized image database, contains 200 images of 533 x 800 pixels (10 groups of 20) and was used to evaluate distortion against the free software packages Autostitch and PTGui, whose parameters are pre-defined for a resolution of 533 x 800 pixels. Sequential processing times showed methodology 3 to be the fastest, 63.5% faster than methodology 1 and 44.5% faster than methodology 2. Parallel processing was evaluated on a computer with 2, 4, and 8 threads (4 physical cores plus 4 virtual cores), reducing the mosaic construction time for methodology 1 by 60%. A configuration of 4 threads (physical cores) proved most effective in execution time and speedup, since running 8 threads brings the virtual threads into play. Distortion tests showed that mosaics generated with methodology 1 exhibit lower distortion for 7 of 10 image groups. Distortions at the junctions of five mosaics built from single image pairs were also evaluated with methodology 3, which showed lower distortion for 4 of the 5 mosaics. The contributions of this work are methodologies 2 and 3, the minimization of distortion at mosaic junctions, the OpenMP parallelization, and an assessment of parallelization with MPI.
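A minimal sketch of the OpenMP pattern the methodologies rely on follows: registration of adjacent image pairs is independent and parallelizes cleanly, while composition into the mosaic stays sequential. The helper functions and the Homography type are hypothetical placeholders, not the project's API.

#include <omp.h>

#define N_IMAGES 10   /* one group of the unresized image database */

/* Hypothetical stand-ins for the SIFT + BBF + RANSAC pipeline. */
typedef struct { double h[9]; } Homography;   /* row-major 3x3 */
Homography sift_match_homography(int img_a, int img_b);     /* assumed */
void warp_and_blend(int img, const Homography *to_mosaic);  /* assumed */

static Homography compose(const Homography *a, const Homography *b)
{
    Homography r;
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            r.h[3 * i + j] = 0.0;
            for (int k = 0; k < 3; k++)
                r.h[3 * i + j] += a->h[3 * i + k] * b->h[3 * k + j];
        }
    return r;
}

void build_mosaic(void)
{
    Homography pairwise[N_IMAGES - 1];

    /* Each adjacent pair (i, i+1) is registered by an independent thread;
     * dynamic scheduling absorbs the uneven cost of RANSAC iterations. */
    #pragma omp parallel for schedule(dynamic)
    for (int i = 0; i < N_IMAGES - 1; i++)
        pairwise[i] = sift_match_homography(i, i + 1);

    /* Sequential composition: image i is mapped through the chain of
     * pairwise transforms that precede it. */
    Homography acc = { {1, 0, 0,  0, 1, 0,  0, 0, 1} };
    warp_and_blend(0, &acc);
    for (int i = 1; i < N_IMAGES; i++) {
        acc = compose(&acc, &pairwise[i - 1]);
        warp_and_blend(i, &acc);
    }
}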
7 |
Modelling the hydrology of the Greenland ice sheet. Karatay, Mehmet Rahmi, January 2011.
This thesis aims to better understand the relationships between basal water pressure, friction, and sliding mechanisms at ice-sheet scales. In particular, it develops a new subglacial hydrology model (Hydro) to explicitly predict water pressures in response to basal water production and water injection from the surface. Recent research suggests that the Greenland ice sheet (GIS) is losing a substantial volume of ice through dynamic thinning. This process must be modelled to accurately assess the contribution of the GIS to sea-level rise under future warming scenarios. A key control on dynamic thinning is the presence of water at the ice-bed interface; Zwally et al. (2002) highlight the impact of supraglacial lakes on basal ice dynamics, a process since confirmed by Das et al. (2008) and Shepherd et al. (2009). Many studies focus on the effects of surface meltwater reaching the bed of the GIS, but the underlying processes are often ignored. Geothermal, strain, and frictional melting, the last of which evolves with basal hydrology, provide the background basal pressure profile that surface meltwater perturbs. Without understanding how these heat terms affect the background profile, it is difficult to define basal boundary conditions in models, and therefore difficult to model the dynamic response of the GIS to surface melting. Hydro tracks subglacial water pressures and the evolution of efficient drainage networks. Coupled with the existing 3D thermomechanical ice sheet model Glimmer, model outputs include the effective pressure N and the efficient hydraulic area. Defining frictional heat flux and basal traction as functions of N allows the modelling of the seasonal dynamic response to randomly draining supraglacial lakes. Key results are that frictional heat flux, as a function of N, caps potential runaway feedback mechanisms, and that water converges in topographic troughs under Greenland's outlet glaciers. This leads to a background profile with low N under outlet glaciers; consequently, outlet glaciers show a muted dynamic speedup in response to the seasonal surface signal reaching the bed. Land-terminating ice does not tend to overlie subglacial troughs, and so has a higher background N and a correspondingly larger seasonal response. This, coupled with the effects of ice rheology, can explain the hitherto puzzling lack of observed seasonal velocity change on Jakobshavn Isbræ and other outlet glaciers.
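The coupling the model exploits can be summarized by a standard Budd-type sliding relation, in which basal sliding accelerates as effective pressure drops; Hydro's exact closure is not given in the abstract, so the form below is illustrative:

% Budd-type sliding law (illustrative): basal velocity u_b grows with
% basal shear stress tau_b and diverges as effective pressure
% N = p_i - p_w falls. Outlet-glacier troughs hold a low background N,
% so a further seasonal drop in N adds comparatively little sliding.
\[
  u_b = k \,\frac{\tau_b^{\,p}}{N^{\,q}},
  \qquad N = p_i - p_w, \qquad p, q > 0 .
\]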
8 |
Leakage Power Driven Behavioral Synthesis of Pipelined ASICs. Gopalan, Ranganath, 01 June 2005.
Traditional approaches to power optimization during high-level synthesis have targeted single-cycle designs in which only one input is processed by the datapath at any given time. The throughput of large single-cycle designs can be improved by means of pipelining. In this work, we present a framework for the high-level synthesis of pipelined datapaths with low leakage power dissipation. We explore the effect of pipelining on the leakage power dissipation of data-flow-intensive designs, and present an algorithm for minimizing leakage power during behavioral pipelining. The transistor-level leakage reduction technique employed here is based on Multi-Threshold CMOS (MTCMOS) technology. Pipelined allocation of functional units and registers is performed for fixed data introduction intervals. Our algorithm uses simulated annealing to perform scheduling, allocation, and binding, obtaining pipelined datapaths with low leakage dissipation.
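The abstract names simulated annealing as the search engine for combined scheduling, allocation, and binding; a generic annealing skeleton in C is sketched below for context. The Datapath type and the cost, perturb, and undo routines are hypothetical placeholders (the MTCMOS-aware leakage cost model is the thesis's contribution and is not reproduced here).

#include <math.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the thesis's datapath representation:
 * cost() would estimate leakage under the MTCMOS model; perturb() would
 * re-schedule, re-allocate, or re-bind one operation; undo() reverts it. */
typedef struct Datapath Datapath;
double cost(const Datapath *dp);   /* assumed */
void   perturb(Datapath *dp);      /* assumed */
void   undo(Datapath *dp);         /* assumed */

/* Generic simulated-annealing skeleton (not the thesis's exact code). */
void anneal(Datapath *dp, double t, double t_min, double alpha, int moves)
{
    double c = cost(dp);
    while (t > t_min) {
        for (int m = 0; m < moves; m++) {
            perturb(dp);
            double c_new = cost(dp);
            double d = c_new - c;
            /* Always accept improvements; accept uphill moves with
             * probability exp(-d/t) to escape local minima early on. */
            if (d <= 0.0 || (double)rand() / RAND_MAX < exp(-d / t))
                c = c_new;
            else
                undo(dp);
        }
        t *= alpha;   /* geometric cooling schedule, e.g. alpha = 0.95 */
    }
}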
9 |
Προβλήματα επιτάχυνσης διεργασιών σε grid computing: αλγόριθμοι και πολυπλοκότητα / Task-speedup problems in grid computing: algorithms and complexity. Στούμπου, Αμαλία, 10 October 2008.
The purpose of the present study is to analyze a scheduling problem whose basic definition is as follows: we are given a sequence of tasks to be processed on a set of machines. Each task is characterized by its processing time and must be scheduled on some machine for at least that time. In addition, a subset of the tasks issue speedup requests. The goal is an algorithm that schedules the tasks on the machines so as to minimize some performance metric while serving as many speedup requests as possible.
An introductory chapter provides theoretical background on scheduling problems and algorithms, with particular emphasis on the difference between static and dynamic algorithms. The second chapter studies the problem above in a single-machine setting and in variants involving parameters such as completion deadlines. The result of this study is the development and evaluation of two algorithms, using well-known objective functions such as makespan, as well as some new metrics that are introduced and whose need is explained in detail. Finally, the third chapter surveys scheduling problems on more than one machine; while these problems are NP-complete, the proofs are omitted and remarks on the complexity of their variants are given. The thesis closes with a presentation of dynamic programming as a computational method and an attempt to apply it to scheduling problems.
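The thesis's own algorithms are not reproduced in the abstract; as context for the makespan criterion it builds on, a minimal greedy list scheduler in C follows (Graham's classical rule: place each task on the currently least-loaded machine). Layering the speedup requests on top of such a baseline is exactly the problem the thesis studies and is not attempted here.

/* Greedy list scheduling for makespan: assign each task to the machine
 * with the smallest current load. load[] and assign[] are outputs;
 * returns the resulting makespan. */
double list_schedule(const double *proc_time, int n_tasks,
                     int n_machines, double *load, int *assign)
{
    for (int m = 0; m < n_machines; m++)
        load[m] = 0.0;

    for (int t = 0; t < n_tasks; t++) {
        int best = 0;
        for (int m = 1; m < n_machines; m++)
            if (load[m] < load[best])
                best = m;
        assign[t] = best;
        load[best] += proc_time[t];
    }

    double makespan = 0.0;   /* completion time of the busiest machine */
    for (int m = 0; m < n_machines; m++)
        if (load[m] > makespan)
            makespan = load[m];
    return makespan;
}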