About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Prediction Of Queue Waiting Times For Metascheduling On Parallel Batch Systems

Rajath Kumar (has links) (PDF)
Production parallel systems are space-shared and employ batch queues in which submitted jobs wait before execution. Jobs submitted to parallel batch systems therefore incur queue waiting times in addition to their execution times. Predicting these queue waiting times is important for giving users overall estimates and can also help meta-schedulers make scheduling decisions.

In the first part of our research, we have developed an integrated framework, PQStar, for identification and prediction of jobs with short queue waiting times. Analyses of supercomputer job traces reveal that about 56 to 99% of jobs incur queue waiting times of less than an hour. Identifying these quick starters, or jobs with short queue waiting times, is therefore essential for overall improvement of queue waiting time predictions. An important aspect of our prediction strategy for quick starters is that it considers the processor occupancy state and the queue state at the time of job submission, in addition to job characteristics such as the requested number of processors and the estimated runtime. Our experiments with different production supercomputer job traces show that our prediction strategies correctly identify about 20% more quick starters on average, provide tighter bounds for these jobs, and result in about 24% higher overall prediction accuracy on average than the next best existing method.

We have also developed a framework for predicting ranges of queue waiting times for the other classes of jobs by employing multi-class classification on similar jobs in history. Our hierarchical prediction strategy first predicts the point wait time of a job using a dynamic k-Nearest Neighbor (kNN) method. It then performs a multi-class classification using Support Vector Machines (SVMs) among all the classes of jobs. The probabilities given by the SVM for the predicted class (obtained from the kNN), along with its neighboring classes, are used to provide a set of wait-time ranges with probabilities. Our experiments with different production supercomputer job traces show that our prediction strategies give about 8% better accuracy on average for the non-quick starters, compared to the next best existing method.

Finally, we have used these predictions and probabilities in a meta-scheduling strategy that distributes jobs to different queues/sites in a multi-queue/grid environment to minimize job wait times. For a given target job, we first identify the queues/sites where the job can be a quick starter, giving a set of candidate queues/sites for scheduling. We then compute the expected value of the predicted wait time in each candidate queue/site and schedule the job to the one with the minimum expected value. We have performed experiments with different production supercomputer job traces and with synthetic traces for various system sizes, partitioning schemes, and workloads. These experiments show that our scheduling strategy gives much better performance than existing scheduling policies, reducing the overall average queue waiting time of jobs by about 47% on average.
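The abstract describes a hierarchical prediction scheme (a kNN point estimate followed by SVM class probabilities over wait-time ranges) and an expected-value rule for picking a queue. The sketch below is not the authors' PQStar implementation; it merely illustrates those two steps with scikit-learn on synthetic data. The feature set, the range boundaries, and the treatment of the open-ended range are all assumptions made for illustration.

```python
# Minimal sketch of kNN point prediction + SVM range probabilities + expected-value
# queue selection. Synthetic data throughout; not the thesis's PQStar code.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy job history: [requested_processors, estimated_runtime_s, free_processors, queue_length]
X_hist = rng.uniform([1, 60, 0, 0], [512, 86_400, 1024, 200], size=(500, 4))
wait_hist = rng.exponential(scale=3_600, size=500)        # historical wait times (s)

# Wait-time range classes (boundaries are an assumption, not from the thesis).
bins = np.array([0, 900, 3_600, 14_400, 86_400, np.inf])
class_hist = np.digitize(wait_hist, bins) - 1

knn = KNeighborsRegressor(n_neighbors=5).fit(X_hist, wait_hist)
svm = SVC(probability=True).fit(X_hist, class_hist)

def predict_ranges(job):
    """Point wait time via kNN, then SVM probabilities for the predicted
    class and its immediate neighbours."""
    point = knn.predict([job])[0]
    pred_class = int(np.digitize(point, bins) - 1)
    proba = svm.predict_proba([job])[0]
    classes = list(svm.classes_)
    ranges = []
    for c in (pred_class - 1, pred_class, pred_class + 1):
        if c in classes:
            ranges.append(((bins[c], bins[c + 1]), proba[classes.index(c)]))
    return point, ranges

def pick_queue(job_per_queue):
    """Expected-value scheduling sketch: weight each range's midpoint by its
    probability in every candidate queue and pick the minimum."""
    def expected_wait(job):
        _, ranges = predict_ranges(job)
        mids = [lo + (hi - lo) / 2 if np.isfinite(hi) else lo * 2 for (lo, hi), _ in ranges]
        probs = np.array([p for _, p in ranges])
        return float(np.dot(mids, probs / probs.sum()))
    return min(job_per_queue, key=lambda q: expected_wait(job_per_queue[q]))

job = [64, 7_200, 256, 12]                                 # a hypothetical submission
print(predict_ranges(job))
print(pick_queue({"queueA": job, "queueB": [64, 7_200, 900, 2]}))
```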
2

The automatic design of batch processing systems

Dwyer, Barry January 1999 (has links)
Batch processing is a means of improving the efficiency of transaction processing systems. Despite the maturity of this field, there is no rigorous theory that can assist in the design of batch systems. This thesis proposes such a theory and shows that it is practical to use it to automate system design. This has important consequences; the main impediment to the wider use of batch systems is the high cost of their development and maintenance. The theory is developed twice: informally, in a way that can be used by a systems analyst, and formally, as a result of which a computer program has been developed to prove the feasibility of automated design.

Two important concepts are identified which can aid in the decomposition of any system: 'separability' and 'independence'. Separability is the property that allows processes to be joined together by pipelines or similar topologies. Independence is the property that allows elements of a large set to be accessed and updated independently of one another. Traditional batch processing technology exploits independence when it uses sequential access in preference to random access. It is shown how the same property allows parallel access, resulting in speed gains limited only by the number of processors. This is a useful development that should assist in the design of very high throughput transaction processing systems.

Systems are specified procedurally by describing an ideal system, which generates output and updates its internal state immediately following each input event. The derived systems have the same external behaviour as the ideal system, except that their outputs and internal states lag those of the ideal system arbitrarily. Indeed, their state variables may have different delays, and the systems as a whole may never be in a consistent state. A 'state dependency graph' is derived from a static analysis of a specification. The reduced graph of its strongly-connected components defines a canonical process network from which all possible implementations of the system can be derived by composition. From these it is possible to choose the one that minimises any imposed cost function. Although, in general, choosing the optimum design proves to be an NP-complete problem, it is shown that heuristics can find it quickly in practical cases. / Thesis (Ph.D.)--Mathematical and Computer Sciences (Department of Computer Science), 1999.
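The abstract's central construction, reducing a state dependency graph to its strongly-connected components to obtain a canonical process network, can be illustrated with a small sketch. This is not code from the thesis: the example graph and variable names are hypothetical, and networkx's condensation is used here as a stand-in for the thesis's own static analysis.

```python
# Sketch: condense a toy state dependency graph into its strongly-connected
# components and read off a canonical ordering of processes.
import networkx as nx

# Hypothetical state dependency graph: an edge (a, b) means "computing b reads a".
deps = nx.DiGraph([
    ("input_event", "account_balance"),
    ("account_balance", "interest_accrued"),
    ("interest_accrued", "account_balance"),   # mutual dependency -> same component
    ("account_balance", "statement_line"),
    ("statement_line", "report"),
])

# The reduced graph of strongly-connected components is a DAG; each component
# groups state variables that must be updated together by one process.
network = nx.condensation(deps)
processes = [sorted(network.nodes[n]["members"]) for n in nx.topological_sort(network)]
print(processes)
# e.g. [['input_event'], ['account_balance', 'interest_accrued'],
#       ['statement_line'], ['report']]

# Candidate implementations are compositions of adjacent processes in this DAG;
# for small networks a cost function could be minimised by exhaustive search,
# with heuristics taking over as the network grows.
```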
3

Study of Hydrocarbon Waste Biodegradation and the Role of Biosurfactants in the Process

Fallon, Agata M. 18 September 1998 (has links)
Two types of oily waste sludges generated by a railroad maintenance facility were studied with the aim of reducing the volume of hydrocarbon waste. The specific goals of this laboratory study were to evaluate the rate and extent of microbial degradation, the benefits of organism addition, the role of biosurfactants, and dewatering properties. The oily waste sludges differed in characteristics and contained a mixture of water, motor oil, lubricating oil, and other petroleum products. Degradation was measured using COD, suspended solids, GC measurements of extractable material, and non-extractable material concentration. Biosurfactant production was characterized using surface tension and polysaccharide measurements. Degradation of ten percent waste oil showed that removal over a 91-day experiment was 75 percent for COD and suspended solids, 98 percent for extractable oil, and negligible for non-extractable material. It was concluded that methylene chloride extraction could be used to estimate the degradation potential of a hydrocarbon waste. Addition of organisms increased the rate and extent of degradation over 22 days but did not provide any benefit over 91 days. The data suggested that the microorganisms degraded simple compounds first and then produced biosurfactants. It was thought that the biosurfactants remained attached to the organism membrane and increased solubility, stimulating the degradation of difficult-to-degrade waste oil. After the oil was degraded, the biosurfactants became ineffective. The dewatering properties of 10 percent oily sludge deteriorated with the production of biosurfactant and improved after the surfactant was degraded, owing to changes in oil solubility. / Master of Science
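As a small aside on how removal figures like the 75% and 98% reported above are obtained, the sketch below shows the standard percent-removal calculation from an initial and a final measurement. The numbers used are assumed for illustration, not data from the thesis.

```python
# Percent removal over an incubation period from initial and final measurements.
def percent_removal(initial: float, final: float) -> float:
    """Fraction of the starting amount removed, expressed as a percent."""
    return 100.0 * (initial - final) / initial

# Hypothetical 91-day measurements (mg/L), chosen to mirror the reported figures.
print(round(percent_removal(initial=40_000, final=10_000), 1))   # COD: 75.0
print(round(percent_removal(initial=85_000, final=1_700), 1))    # extractable oil: 98.0
```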
