About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Cluster load balancing using process migration

Nuttall, Mark Patrick January 1997 (has links)
No description available.
2

Workload characterization, controller design and performance evaluation for cloud capacity autoscaling

Ali-Eldin Hassan, Ahmed January 2015 (has links)
This thesis studies cloud capacity auto-scaling, or how to provision and release resources to a service running in the cloud based on its actual demand using an automatic controller. As the performance of server systems depends on the system design, the system implementation, and the workloads the system is subjected to, we focus on these aspects with respect to designing auto-scaling algorithms. Towards this goal, we design and implement two auto-scaling algorithms for cloud infrastructures. The algorithms predict the future load for an application running in the cloud. We discuss the different approaches to designing an auto-scaler that combines reactive and proactive control methods and is able to handle long-running requests, e.g., tasks running for longer than the actuation interval, in a cloud. We compare the performance of our algorithms with state-of-the-art auto-scalers and evaluate the controllers’ performance with a set of workloads. As any controller is designed with an assumption on the operating conditions and system dynamics, the performance of an auto-scaler varies with different workloads.

In order to better understand workload dynamics and evolution, we analyze a 6-year-long workload trace of the sixth most popular Internet website. In addition, we analyze a workload from one of the largest Video-on-Demand streaming services in Sweden. We discuss the popularity of objects served by the two services, the spikes in the two workloads, and the invariants in the workloads. We also introduce a measure for the disorder in a workload, i.e., the amount of burstiness. The measure is based on Sample Entropy, an empirical statistic used in biomedical signal processing to characterize biomedical signals. The introduced measure can be used to characterize workloads based on their burstiness profiles. We compare our measure with the literature on quantifying burstiness in a server workload, and show its advantages.

To better understand the tradeoffs between using different auto-scalers with different workloads, we design a framework to compare auto-scalers and give probabilistic guarantees on their performance in worst-case scenarios. Using different evaluation criteria and more than 700 workload traces, we compare six state-of-the-art auto-scalers that we believe represent the development of the field in the past 8 years. Knowing that the auto-scalers’ performance depends on the workloads, we design a workload analysis and classification tool that assigns a workload to its most suitable elasticity controller out of a set of implemented controllers. The tool has two main components: an analyzer and a classifier. The analyzer analyzes a workload and feeds the analysis results to the classifier. The classifier assigns a workload to the most suitable elasticity controller based on the workload characteristics and a set of predefined business-level objectives. The tool is evaluated with a set of collected real workloads and a set of generated synthetic workloads. Our evaluation results show that the tool can help a cloud provider improve the QoS provided to customers.
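The burstiness measure described in this abstract builds on Sample Entropy. As a rough illustration of the underlying statistic only — a simplified variant, not the thesis's exact formulation, with illustrative parameters m and r — a minimal Python sketch:

```python
import numpy as np

def sample_entropy(series, m=2, r=None):
    """Sample Entropy of a 1-D series: -ln(A/B), where B counts pairs of
    length-m templates within tolerance r (Chebyshev distance) and A counts
    the same for length m+1. Higher values indicate more irregularity."""
    x = np.asarray(series, dtype=float)
    if r is None:
        r = 0.2 * x.std()          # a common default tolerance
    n = len(x)

    def count_matches(length):
        # Build all overlapping templates of the given length.
        templates = np.array([x[i:i + length] for i in range(n - length + 1)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance to all later templates (self-matches excluded).
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

# Example: an irregular request-rate trace scores higher than a smooth one.
rng = np.random.default_rng(0)
smooth = np.sin(np.linspace(0, 20, 500))
irregular = rng.poisson(5, 500).astype(float)
print(sample_entropy(smooth), sample_entropy(irregular))
```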
3

Enhancing the Accuracy of Synthetic File System Benchmarks

Farhat, Salam 01 January 2017 (has links)
File system benchmarking plays an essential part in assessing a file system’s performance. It is especially difficult to measure and study file system performance because it involves several layers of hardware and software. Furthermore, different systems have different workload characteristics, so while a file system may be optimized for one given workload it might not perform optimally under other types of workloads. Thus, it is imperative that the file system under study be examined with a workload equivalent to its production workload to ensure that it is optimized according to its usage. The most widely used benchmarking method is synthetic benchmarking due to its ease of use and flexibility. The flexibility of synthetic benchmarks allows system designers to produce a variety of different workloads that provide insight into how the file system will perform under slightly different conditions. The downside of synthetic workloads is that they produce generic workloads that do not have the same characteristics as production workloads. For instance, synthetic benchmarks do not take into consideration the effects of the cache, which can greatly impact the performance of the underlying file system. In addition, they do not model the variation in a given workload. This can lead to file systems that are not optimally designed for their usage. This work enhanced synthetic workload generation methods by taking into consideration how file system operations are satisfied by the lower-level function calls. In addition, this work modeled the variations in the workload’s footprint when present. The first step in the methodology was to run a given workload and trace it with a tool called tracefs. The collected traces contained data on the file system operations and the lower-level function calls that satisfied these operations. The trace was then divided into chunks small enough that the workload characteristics of each chunk could be considered uniform. A configuration file modeling each chunk was then generated and supplied to FileRunner, a synthetic workload generator tool created by this work. The workload definition for each chunk allowed FileRunner to generate a synthetic workload that produced the same workload footprint as the corresponding segment in the original workload. In other words, the synthetic workload would exercise the lower-level function calls in the same way as the original workload. Furthermore, FileRunner generated a synthetic workload for each specified segment in the order in which they appeared in the trace, resulting in a final workload that mimicked the variation present in the original workload. The results indicated that the methodology can create a workload with a throughput within a 10% difference and with operation latencies, with the exception of the create latencies, within the allowable 10% difference and in some cases within the 15% maximum allowable difference. The work was able to accurately model the I/O footprint: in some cases the difference was negligible, and in the worst case it was 2.49%.
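As a rough illustration of the chunking step described above, the following sketch splits an ordered trace into fixed-size chunks and summarizes each chunk's operation mix. The record fields and chunk size are assumptions for illustration only; the actual tracefs trace format and FileRunner configuration files are not shown here.

```python
from collections import Counter

def chunk_trace(records, chunk_size=1000):
    """Split an ordered list of trace records into fixed-size chunks and
    summarize each chunk's operation mix, so each chunk can be treated as
    a (roughly) uniform workload segment.

    Each record is assumed to be a dict like {"op": "read", "bytes": 4096},
    a stand-in for the fields a real trace would provide.
    """
    profiles = []
    for start in range(0, len(records), chunk_size):
        chunk = records[start:start + chunk_size]
        ops = Counter(r["op"] for r in chunk)
        total = len(chunk)
        profiles.append({
            "op_mix": {op: count / total for op, count in ops.items()},
            "avg_bytes": sum(r["bytes"] for r in chunk) / total,
            "num_ops": total,
        })
    return profiles

# Example: a tiny synthetic trace with a shifting read/write ratio.
trace = [{"op": "read", "bytes": 4096}] * 800 + [{"op": "write", "bytes": 8192}] * 400
for profile in chunk_trace(trace, chunk_size=600):
    print(profile)
```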
4

”Mycket stressigt, många barn, många krav” : En intervjustudie om hur förskolepedagoger upplever sin arbetssituation / ”Very stressful, many children, many requirements” : An interview study of how preschool teachers perceive their work situation

Dahlström, Annika, Ahlqvist Nordlöf, Marie January 2013 (has links)
The main purpose of this study was to investigate how the preschool teacher's work situation has changed over the course of their working years. We also wanted to find out which factors have influenced the stress and what consequences it has had. We used a qualitative method to learn how the teachers view their work situation. We interviewed ten teachers using a small number of simple, clear questions, which gave us personal and detailed answers from our informants. Our theoretical basis rests on research on stress and societal change. The results of our study show that all the teachers were satisfied with their choice of profession but not with their work situation. All of them felt affected by the increased stress, and the main stress factors were the size of the child groups, the new requirements, and the changed work tasks. / The main objective of this study is to investigate how the preschool teachers' work situation has changed during their working years. We also wanted to see which elements influenced the stress they felt and what the consequences were. We used a qualitative interview method to learn what the preschool teachers think of their work situation. We interviewed ten teachers, and by using only a few plain questions we got detailed and personal answers from our informants. The theoretical basis of our research concerns stress and social change. The results show that the preschool teachers all experienced that their work situation had changed during their working years. The study shows that all the interviewed teachers felt that the burden in their work environment had increased to intolerable levels. They felt that the prime reasons for their stress were the group size of the children, the new demands, and the altered job assignments.
5

Accurate Hardware RAID Simulator

Weng, Darrin Kalung 01 June 2013 (has links)
Computer data storage is growing at an astonishing rate. With cloud computing and the growth of the Internet, enterprise storage has been predicted to grow at rates as high as 300% per year. To fulfill this need, technologies such as Redundant Array of Independent Disks (RAID) are being used in industry today. Not only does RAID increase I/O performance, it also provides redundancy measures to protect against hardware failure. Even though RAID has existed for some time now and is well understood, proprietary optimizations such as command scheduling and cache strategies that are employed by current RAID controllers are not well known. This thesis presents a model for RAID 5 that incorporates these features and describes the overall function of hardware RAID controllers. A Python implementation of this model, the Accurate Hardware RAID Simulator (AHRS), is also presented and validated against a current hardware RAID controller. It is shown that AHRS can reproduce the behavior of a hardware RAID system with an accuracy of 97.92% on average compared to an LSI hardware RAID controller.
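For context on what a RAID 5 model has to capture at minimum, the sketch below maps logical blocks to data and parity disks under one simple rotating-parity layout. It is a hypothetical illustration, not code from AHRS, and it ignores the controller cache and command-scheduling behavior that the thesis models.

```python
def raid5_map(logical_block, num_disks, stripe_unit_blocks=1):
    """Map a logical block number to (data disk, stripe row, parity disk)
    for a simple RAID 5 layout with rotating parity. Real controllers use
    one of several layouts and add caching on top of this geometry."""
    data_disks = num_disks - 1                     # one disk's worth of parity per stripe
    stripe_unit = logical_block // stripe_unit_blocks
    stripe_row = stripe_unit // data_disks
    parity_disk = (num_disks - 1) - (stripe_row % num_disks)
    # Data units fill the non-parity disks in order, skipping the parity disk.
    slot = stripe_unit % data_disks
    data_disk = slot if slot < parity_disk else slot + 1
    return data_disk, stripe_row, parity_disk

# Example: lay out the first 8 stripe units across a 4-disk array.
for block in range(8):
    print(block, raid5_map(block, num_disks=4))
```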
6

Radiographer reporting in the UK: Is the current scope of practice limiting plain film reporting capacity?

Milner, R.C., Culpan, Gary, Snaith, Beverly 02 August 2016 (has links)
Objective: To update knowledge on individual radiographer contribution to plain-film reporting workloads; to assess whether there is scope to further increase radiographer reporting capacity within this area. Methods: Reporting radiographers were invited to complete an online survey. Invitations were posted to every acute National Health Service trust in the UK whilst snowball sampling was employed via a network of colleagues, ex-colleagues and acquaintances. Information was sought regarding the demographics, geographical location and anatomical and referral scope of practice. Results: A total of 259 responses were received. 15.1% and 7.7% of respondents are qualified to report chest and abdomen radiographs, respectively. The mean time spent reporting per week is 14.5 h (range 1–37.5). 23.6% of radiographers report only referrals from emergency departments whilst 50.6% of radiographers have limitations on their practice. Conclusion: The scope of practice of reporting radiographers has increased since previous studies; however, radiographer reporting of chest and abdomen radiographs has failed to progress in line with demand. There remain opportunities to increase radiographer capacity to assist the management of reporting backlogs. Advances in knowledge: This study is the first to examine demographic factors of reporting radiographers across the UK and is one of the largest in-depth studies of UK reporting radiographers, at individual level, to date.
7

Performance Analysis and Evaluation of Divisible Load Theory and Dynamic Loop Scheduling Algorithms in Parallel and Distributed Environments

Balasubramaniam, Mahadevan 14 August 2015 (has links)
High performance parallel and distributed computing systems are used to solve large, complex, and data parallel scientific applications that require enormous computational power. Data parallel workloads, which require performing similar operations on different data objects, are present in a large number of scientific applications, such as N-body simulations and Monte Carlo simulations, and are expressed in the form of loops. Data parallel workloads that lack precedence constraints are called arbitrarily divisible workloads and are amenable to easy parallelization. Load imbalance that arises from various sources, such as application, algorithmic, and systemic characteristics, during the execution of scientific applications degrades performance. Scheduling of arbitrarily divisible workloads to address load imbalance, in order to obtain better utilization of computing resources, is a major area of research. Divisible load theory (DLT) and dynamic loop scheduling (DLS) algorithms are two algorithmic approaches employed in the scheduling of arbitrarily divisible workloads. Despite sharing the same goal of achieving load balancing, the two approaches are fundamentally different. Divisible load theory algorithms are linear, deterministic, and platform dependent, whereas dynamic loop scheduling algorithms are probabilistic and platform agnostic. Divisible load theory algorithms have traditionally been used for performance prediction in environments characterized by known or expected variation in the system characteristics at runtime. Dynamic loop scheduling algorithms are designed to simultaneously address all the sources of load imbalance that stochastically arise at runtime from application, algorithmic, and systemic characteristics. In this dissertation, an analysis and performance evaluation of DLT and DLS algorithms is presented in the form of a scalability study and a robustness investigation, and the effect of network topology on their performance is studied. A hybrid scheduling approach is also proposed that integrates DLT and DLS algorithms. The hybrid approach combines the strengths of DLT and DLS algorithms, improves the performance of scientific applications running in large-scale parallel and distributed computing environments, and delivers performance superior to that obtained by applying DLT algorithms in isolation. The range of conditions for which the hybrid approach is useful is also identified and discussed.
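To make the DLT side of the contrast concrete, the following sketch computes load fractions for one of the simplest divisible load theory models: single-installment distribution on a star network with no result collection, where all workers finish computing at the same instant. The rates w and z are illustrative assumptions; the dissertation's own models and platforms are more elaborate.

```python
def dlt_fractions(w, z):
    """Closed-form load fractions for a single-installment divisible-load
    schedule on a star network.

    w[i]: time to compute one unit of load on worker i
    z[i]: time to transmit one unit of load to worker i
    Derived from the recurrence alpha[i] * w[i] = alpha[i+1] * (z[i+1] + w[i+1]),
    which equalizes the finishing times of consecutive workers.
    """
    alpha = [1.0]
    for i in range(1, len(w)):
        alpha.append(alpha[-1] * w[i - 1] / (z[i] + w[i]))
    total = sum(alpha)
    return [a / total for a in alpha]

# Example: three workers, the first slightly faster than the others.
fractions = dlt_fractions(w=[1.0, 1.2, 1.5], z=[0.1, 0.1, 0.2])
print(fractions, sum(fractions))
```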
8

Metrics, Models and Methodologies for Energy-Proportional Computing

Subramaniam, Balaji 21 August 2015 (has links)
Massive data centers housing thousands of computing nodes have become commonplace in enterprise computing, and the power consumption of such data centers is growing at an unprecedented rate. Exacerbating such costs, data centers are often over-provisioned to avoid costly outages associated with the potential overloading of electrical circuitry. However, such over-provisioning is often unnecessary since a data center rarely operates at its maximum capacity. It is imperative that we realize effective strategies to control the power consumption of servers and improve the energy efficiency of data centers. Adding to the problem is the inability of servers to exhibit energy proportionality, which diminishes the overall energy efficiency of the data center. Therefore, in this dissertation, we investigate whether it is possible to achieve energy proportionality at the server and cluster level through efficient power and resource provisioning. Towards this end, we provide a thorough analysis of energy proportionality at the server and cluster level and provide insight into the power-saving opportunities and mechanisms to improve energy proportionality. Specifically, we make the following contributions at the server level using enterprise-class workloads. We analyze the average power consumption of the full system as well as the subsystems and describe the energy proportionality of these components, characterize the instantaneous power profile of enterprise-class workloads using the on-chip energy meters, design a runtime system based on a load prediction model and an optimization framework to set the appropriate power constraints to meet specific performance targets, and then present the effects of our runtime system on the energy proportionality, average power, performance, and instantaneous power consumption of enterprise applications. We then make the following contributions at the cluster level. Using data serving, web searching, and data caching as our representative workloads, we first analyze the component-level power distribution on a cluster. Second, we characterize how these workloads utilize the cluster. Third, we analyze the potential of power provisioning techniques (i.e., active low-power, turbo, and idle low-power modes) to improve energy proportionality. We then describe the ability of active low-power modes to provide trade-offs between power and latency. Finally, we compare and contrast power provisioning and resource provisioning techniques. This thesis sheds light on mechanisms to tune the power provisioned for a system under strict performance targets, and on opportunities to improve energy proportionality and instantaneous power consumption via efficient power and resource provisioning at the server and cluster level. / Ph. D.
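As background, energy proportionality is commonly scored by comparing a measured power-versus-utilization curve against an ideal curve that rises linearly from zero at idle to peak power at full load. The sketch below computes one such score; it is a generic formulation from the literature, not necessarily the metric used in this dissertation, and the example numbers are invented.

```python
def trapezoid(y, x):
    """Trapezoidal integral of y over x."""
    return sum((x[i + 1] - x[i]) * (y[i] + y[i + 1]) / 2 for i in range(len(x) - 1))

def energy_proportionality(utilization_pct, power_watts):
    """A simple energy-proportionality score (1.0 = perfectly proportional):
    compares the measured power curve against an ideal curve that rises
    linearly from 0 W at idle to peak power at 100% utilization."""
    u = [p / 100.0 for p in utilization_pct]
    peak = max(power_watts)
    ideal = [ui * peak for ui in u]
    actual_area = trapezoid(power_watts, u)
    ideal_area = trapezoid(ideal, u)
    return 1.0 - (actual_area - ideal_area) / ideal_area

# Example: a server drawing 120 W at idle and 300 W at peak.
util = [0, 25, 50, 75, 100]
power = [120, 180, 230, 270, 300]
print(round(energy_proportionality(util, power), 3))   # roughly 0.52
```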
9

Cost-Effective Resource Configurations for Executing Data-Intensive Workloads in Public Clouds

Mian, Rizwan 04 December 2013 (has links)
The rate of data growth in many domains is straining our ability to manage and analyze it. Consequently, we see the emergence of computing systems that attempt to efficiently process data-intensive applications, or I/O-bound applications with large data. Cloud computing offers “infinite” resources on demand, and on a pay-as-you-go basis. As a result, it has gained interest for large-scale data processing. Given this supposedly infinite resource set, we need a provisioning process to determine appropriate resources for data processing or workload execution. We observe that the prevalent data processing architectures do not usually employ provisioning techniques available in a public cloud, and existing provisioning techniques have largely ignored data-intensive applications in public clouds. In this thesis, we take a step towards bridging the gap between existing data processing approaches and the provisioning techniques available in a public cloud, such that the monetary cost of executing data-intensive workloads is minimized. We formulate the provisioning problem and include constructs that exploit a cloud’s elasticity to enlist any number of resources to host a multi-tenant database system prior to execution. The provisioning is modeled as a search problem, and we use standard search heuristics to solve it. We propose a novel framework for resource provisioning in a cloud environment. Our framework allows pluggable cost and performance models. We instantiate the framework by developing various search algorithms and cost and performance models to support the search for an effective resource configuration. For evaluation, we consider data-intensive workloads that consist of transactional, analytical, or mixed workloads and access multiple database tenants. The workloads are based on standard TPC benchmarks. In addition, user preferences on response time or throughput are expressed as constraints. Our propositions and their results are validated in a real public cloud, namely the Amazon cloud. The evaluation supports our claim that the framework is an effective tool for provisioning database workloads in a public cloud with minimal dollar cost. / Thesis (Ph.D, Computing) -- Queen's University, 2013-11-30 19:30:39.427
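To illustrate the general idea of provisioning as a search over resource configurations with pluggable cost and performance models, here is a hypothetical sketch. The instance catalogue, the queueing-style performance model, and the constraint values are assumptions for illustration only; they are not taken from the thesis or from any cloud provider's pricing.

```python
from itertools import product

# Hypothetical instance catalogue: hourly price and capacity in requests/sec.
INSTANCE_TYPES = {
    "small":  {"price": 0.10, "capacity": 100},
    "medium": {"price": 0.20, "capacity": 220},
    "large":  {"price": 0.40, "capacity": 480},
}

def predicted_response_time(config, demand):
    """Toy performance model: response time blows up as utilization nears 1.
    A real framework would plug in a calibrated model here."""
    capacity = sum(INSTANCE_TYPES[t]["capacity"] * n for t, n in config.items())
    if capacity <= demand:
        return float("inf")
    utilization = demand / capacity
    return 0.05 / (1.0 - utilization)

def hourly_cost(config):
    return sum(INSTANCE_TYPES[t]["price"] * n for t, n in config.items())

def cheapest_config(demand, max_response_time, max_per_type=10):
    """Exhaustive search over small configurations; search heuristics would
    replace this enumeration for larger configuration spaces."""
    best = None
    for counts in product(range(max_per_type + 1), repeat=len(INSTANCE_TYPES)):
        config = dict(zip(INSTANCE_TYPES, counts))
        if predicted_response_time(config, demand) <= max_response_time:
            if best is None or hourly_cost(config) < hourly_cost(best):
                best = config
    return best

# Example: find the cheapest configuration meeting a response-time constraint.
best = cheapest_config(demand=900, max_response_time=0.2)
print(best, hourly_cost(best))
```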
10

HPC scheduling in a brave new world

Gonzalo P., Rodrigo January 2017 (has links)
Many breakthroughs in scientific and industrial research are supported by simulations and calculations performed on high performance computing (HPC) systems. These systems typically consist of uniform, largely parallel compute resources and high-bandwidth concurrent file systems interconnected by low-latency synchronous networks. HPC systems are managed by batch schedulers that order the execution of application jobs to maximize utilization while steering turnaround time. In the past, demands for greater capacity were met by building more powerful systems with more compute nodes, greater transistor densities, and higher processor operating frequencies. Unfortunately, the scope for further increases in processor frequency is restricted by the limitations of semiconductor technology. Instead, parallelism within processors and in numbers of compute nodes is increasing, while the capacity of single processing units remains unchanged. In addition, HPC systems’ memory and I/O hierarchies are becoming deeper and more complex to keep up with the systems’ processing power. HPC applications are also changing: the need to analyze large data sets and simulation results is increasing the importance of data processing and data-intensive applications. Moreover, composition of applications through workflows within HPC centers is becoming increasingly important. This thesis addresses the HPC scheduling challenges created by such new systems and applications. It begins with a detailed analysis of the evolution of the workloads of three reference HPC systems at the National Energy Research Scientific Computing Center (NERSC), with a focus on job heterogeneity and scheduler performance. This is followed by an analysis and improvement of a fairshare prioritization mechanism for HPC schedulers. The thesis then surveys the current state of the art and expected near-future developments in HPC hardware and applications, and identifies unaddressed scheduling challenges that they will introduce. These challenges include application diversity and issues with workflow scheduling or the scheduling of I/O resources to support applications. Next, a cloud-inspired HPC scheduling model is presented that can accommodate application diversity, takes advantage of malleable applications, and enables short wait times for applications. Finally, to support ongoing scheduling research, an open source scheduling simulation framework is proposed that allows new scheduling algorithms to be implemented and evaluated in a production scheduler using workloads modeled on those of a real system. The thesis concludes with the presentation of a workflow scheduling algorithm that minimizes workflows’ turnaround time without over-allocating resources. / Work was also supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research (ASCR), and we used resources at the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy, both under Contract No. DE-AC02-05CH11231.
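As background on fairshare prioritization, the sketch below combines exponentially decayed historical usage with a SLURM-style 2^(-usage/share) factor. It is a generic illustration of the class of mechanism the thesis analyzes, not the thesis's own formulation, and all numbers are invented.

```python
def decayed_usage(usage_records, now, half_life=7 * 24 * 3600):
    """Aggregate historical CPU-seconds with exponential decay, so that
    recent usage weighs more than old usage."""
    return sum(
        cpu_seconds * 0.5 ** ((now - timestamp) / half_life)
        for timestamp, cpu_seconds in usage_records
    )

def fairshare_factor(account_share, account_usage, total_usage):
    """Map an account's normalized share and normalized usage to a factor
    in (0, 1]: accounts that used less than their share are boosted,
    accounts that overused are penalized (SLURM-style 2^(-usage/share))."""
    norm_usage = account_usage / total_usage if total_usage > 0 else 0.0
    return 2.0 ** (-norm_usage / account_share)

# Example: two accounts with equal 50% shares but unequal recent usage.
now = 1_000_000
usage_a = decayed_usage([(now - 3600, 5000), (now - 86400, 20000)], now)
usage_b = decayed_usage([(now - 3600, 50000)], now)
total = usage_a + usage_b
print(fairshare_factor(0.5, usage_a, total), fairshare_factor(0.5, usage_b, total))
```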
