151 |
Towards a Framework for DHT Distributed Computing. Rosen, Andrew. 12 August 2016
Distributed Hash Tables (DHTs) are protocols and frameworks used by peer-to-peer (P2P) systems. They are used as the organizational backbone for many P2P file-sharing systems due to their scalability, fault-tolerance, and load-balancing properties. These same properties are highly desirable in a distributed computing environment, especially one that wants to use heterogeneous components. We show that DHTs can be used not only as the framework to build a P2P file-sharing service, but as a P2P distributed computing platform. We propose creating a P2P distributed computing framework using distributed hash tables, based on our prototype system ChordReduce. This framework would make it simple and efficient for developers to create their own distributed computing applications. Unlike Hadoop and similar MapReduce frameworks, our framework can be used both in the context of a datacenter and as part of a P2P computing platform. This opens up new possibilities for building platforms for distributed computing problems. One advantage our system will have is an autonomous load-balancing mechanism. Nodes will be able to independently acquire work from other nodes in the network, rather than sitting idle. More powerful nodes in the network will be able to use the mechanism to acquire more work, exploiting the heterogeneity of the network. By utilizing the load-balancing algorithm, a datacenter could easily leverage additional P2P resources at runtime on an as-needed basis. Our framework will allow MapReduce-like or distributed machine learning platforms to be easily deployed in a greater variety of contexts.
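The autonomous load-balancing mechanism described above lends itself to a short illustration. The sketch below is a toy model of idle nodes pulling work from busy neighbours on a ring; the class, method, and attribute names are assumptions for illustration, not the ChordReduce protocol itself.

```python
class Node:
    """Toy DHT node with a local work queue (illustrative, not ChordReduce)."""

    def __init__(self, node_id, capacity):
        self.node_id = node_id    # position on the ring
        self.capacity = capacity  # relative processing power of this node
        self.work_queue = []      # tasks currently held by this node
        self.neighbors = []       # e.g. successor / finger-table entries

    def acquire_work(self):
        """An idle node independently pulls a batch of tasks from its busiest
        neighbour; more powerful nodes request larger batches, which is how
        the heterogeneity of the network would be exploited."""
        if self.work_queue:
            return 0  # not idle, nothing to do
        busy = [n for n in self.neighbors if len(n.work_queue) > 1]
        if not busy:
            return 0
        donor = max(busy, key=lambda n: len(n.work_queue))
        batch = min(len(donor.work_queue) // 2, self.capacity)
        for _ in range(batch):
            self.work_queue.append(donor.work_queue.pop())
        return batch
```

In this picture, a datacenter that wants to tap extra P2P resources at runtime would simply have the new nodes join the ring and start pulling work.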
|
152 |
Natural balancing of three-phase 2-cell and 3-cell multicell converters. Salagae, Isaac Mahijoko. 03 1900
Thesis (PhD (Electrical and Electronic Engineering))--University of Stellenbosch, 2010. / ENGLISH ABSTRACT: The multicell inverter, being a widely used multilevel converter, has received much attention in recent years due to problems associated with cell capacitor voltage. In this dissertation we study the balancing problem with a focus on steady-state unbalance. This is achieved by a systematic and mathematically rigorous study of the natural balancing mechanisms of the three-phase 2-cell and 3-cell multicell converter, undertaken using dynamic modelling of the multicell converter, Bennet's geometric model, and steady-state and time-constant analysis. Space vector analysis is also performed for the three-phase 2-cell multicell converter. The theory is verified by comparing theoretical results with simulation results. / AFRIKAANSE OPSOMMING: The multicell inverter, as a widely used multilevel converter, has attracted considerable interest in recent years because of the problems associated with cell capacitor voltage. In this dissertation the balancing problem is studied with an emphasis on steady-state unbalance. This was done through a systematic and mathematically rigorous study of the natural balancing mechanisms of the three-phase 2-cell and 3-cell multicell converter, carried out using dynamic modelling of the multicell converter, Bennet's geometric model, and steady-state and time-constant analyses; space vector analysis was also performed for the three-phase 2-cell multicell converter. The theory is confirmed by comparing the theoretical results with the simulation results.
|
153 |
Scale and Concurrency of Massive File System Directories. Patil, Swapnil. 01 May 2013
File systems store data in files and organize these files in directories. Over decades, file systems have evolved to handle increasingly large files: they distribute files across a cluster of machines, they parallelize access to these files, they decouple data access from metadata access, and hence they provide scalable file access for high-performance applications. Sadly, most cluster-wide file systems lack any sophisticated support for large directories. In fact, most cluster file systems continue to use directories that were designed for humans, not for large-scale applications. The former use-case typically involves hundreds of files and infrequent concurrent mutations in each directory, while the latter use-case consists of tens of thousands of concurrent threads that simultaneously create large numbers of small files in a single directory at very high speeds. As a result, most cluster file systems exhibit a very poor file create rate in a directory either due to limited scalability from using a single centralized directory server or due to reduced concurrency from using a system-wide synchronization mechanism.
This dissertation proposes a directory architecture called GIGA+ that enables a directory in a cluster file system to store millions of files and sustain hundreds of thousands of concurrent file creations every second. GIGA+ makes two contributions: a concurrent indexing technique to scale out a growing directory on many servers and an efficient layered design to scale up performance. GIGA+ uses a hash-based, incremental partitioning algorithm that enables highly concurrent directory indexing through asynchrony and eventual consistency of the internal indexing state (while providing strong consistency guarantees to the application data). This dissertation analyzes several trade-offs between data migration overhead, load balancing effectiveness, directory scan performance, and entropy of indexing state made by the GIGA+ design, and compares them with policies used in other systems. GIGA+ also demonstrates a modular implementation that separates directory distribution from directory representation. It layers a client-server middleware, which spreads work among many GIGA+ servers, on top of a backend storage system, which manages on-disk directory representation. This dissertation studies how system behavior is tightly dependent on both the indexing scheme and the on-disk implementations, and evaluates how the system performs for different backend configurations including local and shared-disk stores. The GIGA+ prototype delivers highly scalable directory performance (that exceeds the most demanding Petascale-era requirements), provides the traditional UNIX file system interface (that can run applications without any modifications), and offers new functionality layered on existing cluster file systems (that lack support for distributed directories).
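The hash-based, incremental partitioning idea can be illustrated with a small sketch. The code below is a toy, single-process illustration under assumed names, an assumed split threshold, and an assumed server count; GIGA+'s real indexing state, server mapping, and consistency machinery are considerably richer.

```python
import hashlib

NUM_SERVERS = 8        # assumed server count, purely illustrative
SPLIT_THRESHOLD = 4    # entries a partition may hold before it splits (toy value)

def hash_name(name: str) -> int:
    """Map a file name to a point in the hash space."""
    return int(hashlib.md5(name.encode()).hexdigest(), 16)

class ToyDirectory:
    """Toy incremental hash partitioning in the spirit of GIGA+ (not its code).

    The directory starts as one partition; a partition that outgrows the
    threshold splits its hash range in two, and the new half can be placed
    on another server (here chosen by partition index)."""

    def __init__(self):
        # (depth, index) -> set of entry names in that partition
        self.partitions = {(0, 0): set()}

    def server_for(self, key):
        return key[1] % NUM_SERVERS   # which server owns this partition

    def _locate(self, name):
        h = hash_name(name)
        # the deepest existing partition whose hash suffix matches owns the name
        for depth in range(64, -1, -1):
            key = (depth, h % (1 << depth))
            if key in self.partitions:
                return key

    def create(self, name):
        key = self._locate(name)
        self.partitions[key].add(name)
        if len(self.partitions[key]) > SPLIT_THRESHOLD:
            self._split(key)

    def _split(self, key):
        depth, index = key
        old = self.partitions.pop(key)
        left, right = set(), set()
        for name in old:
            (left if hash_name(name) % (1 << (depth + 1)) == index else right).add(name)
        self.partitions[(depth + 1, index)] = left
        self.partitions[(depth + 1, index + (1 << depth))] = right
```

Creating many files in a loop repeatedly triggers the split, so a single hot directory fans out across servers instead of becoming a centralized bottleneck.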
|
154 |
Performance variation and job enrichment in manual assembly work. Ng, Tat-lun, 伍達倫. January 1978
Industrial Engineering / Master of Science in Engineering
|
155 |
A critical analysis of the Third Circuit's test for due process violations in denials of defense witness immunity requests. Krauss, Samuel Fox. 03 October 2014
Several Supreme Court cases in the latter half of the 20th Century established a criminal defendant's due process right to put forward an effective defense. To put forward an effective defense, one must be able to introduce exculpatory evidence on one's behalf. A defendant's witness may claim the right against self-incrimination, in which case the defendant may request immunity for the witness so that he will testify. If that request is denied, a defendant's due process right to put forward an effective defense may be implicated. The refusal to grant defense witness immunity is one instance of suppression of evidence. In a string of cases in the Third Circuit, the courts have implemented a test for determining under what conditions a due process violation occurs in this situation. But there is significant reason to believe that in implementing the test the court has relied on incorrect assumptions. This paper discusses how the court has relied on unwarranted assumptions to make due process determinations, and concludes that in so doing it has imposed too high a standard for a due process violation. First, the court interprets the test as a test for a due process violation, when there is reason to believe that the court articulating the test meant it to be a test for the appropriateness of judicially created immunity as the remedy for an existing due process violation. Second, the court makes an unwarranted assumption that any strong governmental interest countervails against a grant of witness immunity. Third, the court imposes too high a standard for determining what counts as a strong governmental interest because it does not give sufficient weight to the context of the determination. These three unwarranted assumptions suggest that the court has imposed too high a standard for determining due process violations.
|
156 |
Effective cooperative scheduling of task-parallel applications on multiprogrammed parallel architectures. Varisteas, Georgios. January 2015
Emerging architecture designs include tens of processing cores on a single chip die; it is believed that the number of cores will reach the hundreds in not so many years from now. However, most common parallel workloads cannot fully utilize such systems. They expose fluctuating parallelism, and do not scale up indefinitely as there is usually a point after which synchronization costs outweigh the gains of parallelism. The combination of these issues suggests that large-scale systems will be either multiprogrammed or have their unneeded resources powered off.

Multiprogramming leads to hardware resource contention and, as a result, application performance degradation, even when there are enough resources, due to negative share effects and increased bus traffic. Most often this degradation is quite unbalanced between co-runners, as some applications dominate the hardware over others. Current Operating Systems blindly provide applications with access to as many resources as they ask for. This leads to over-committing the system with too many threads, memory contention, and increased bus traffic. Because an application has no insight into system-wide resource demands, most parallel workloads will create as many threads as there are available cores. If every co-running application does the same, the system ends up with N times as many threads as cores. Threads then need to time-share cores, so the continuous context switching and cache line evictions generate considerable overhead.

This thesis proposes a novel solution across all software layers that achieves throughput optimization and uniform performance degradation of co-running applications. Through a novel, fully automated approach (DVS and Palirria), task-parallel applications can accurately quantify their available parallelism online, generating a meaningful metric as parallelism feedback to the Operating System. A second component in the Operating System scheduler (Pond) uses such feedback from all co-runners to effectively partition available resources.

The proposed two-level scheduling scheme ultimately ensures that each co-runner degrades its performance by the same factor, relative to how it would execute with unrestricted, isolated access to the same hardware. We call this fair scheduling, departing from the traditional notion of equal opportunity, which causes uneven degradation, with some experiments showing at least one application degrading its performance 10 times less than its co-runners.
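A tiny sketch can make the two-level idea concrete. The function below is an assumed, simplified stand-in for the resource-partitioning step, not Pond's actual algorithm: each application reports the parallelism it measures, and cores are granted so that every co-runner receives the same fraction of what it asked for.

```python
def partition_cores(parallelism_feedback, total_cores):
    """Toy partitioning for uniform degradation (illustrative, not Pond).

    parallelism_feedback: {app: parallelism the app reports it can use}
    Returns {app: cores granted}, scaling every request by the same factor
    when the machine is over-subscribed."""
    demand = sum(parallelism_feedback.values())
    if demand <= total_cores:
        return dict(parallelism_feedback)     # enough cores for everyone
    factor = total_cores / demand             # common degradation factor
    grants = {app: max(1, int(req * factor))  # every co-runner keeps >= 1 core
              for app, req in parallelism_feedback.items()}
    leftover = total_cores - sum(grants.values())
    for app in sorted(parallelism_feedback, key=parallelism_feedback.get, reverse=True):
        if leftover <= 0:
            break
        grants[app] += 1                      # give rounding leftovers to the biggest requesters
        leftover -= 1
    return grants

# three co-runners reporting parallelism 32, 16 and 8 on a 16-core machine
print(partition_cores({"A": 32, "B": 16, "C": 8}, 16))   # {'A': 10, 'B': 4, 'C': 2}
```

In the thesis the feedback itself comes from DVS/Palirria rather than a static request, and the grant would be recomputed as the measured parallelism fluctuates.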
|
157 |
Dynamic Load Balancing Schemes for Large-scale HLA-based Simulations. De Grande, Robson E. 26 July 2012
Dynamic balancing of computation and communication load is vital for the execution stability and performance of distributed, parallel simulations deployed on the shared, unreliable resources of large-scale environments. High Level Architecture (HLA) based simulations can experience a decrease in performance due to imbalances that are produced initially and/or during run-time. These imbalances are generated by the dynamic load changes of distributed simulations or by unknown, non-managed background processes resulting from the non-dedication of shared resources. Due to the dynamic execution characteristics of the elements that compose distributed simulation applications, the computational load and interaction dependencies of each simulation entity change during run-time. These dynamic changes lead to an irregular load and communication distribution, which increases resource overhead and execution delays. A static partitioning of load is limited to deterministic applications and is incapable of predicting the dynamic changes caused by distributed applications or by external background processes. Given the relevance of dynamically balancing load for distributed simulations, many balancing approaches have been proposed to offer sub-optimal balancing solutions, but they are limited to certain simulation aspects, specific to particular applications, or unaware of HLA-based simulation characteristics. Therefore, schemes for balancing the communication and computational load during the execution of distributed simulations are devised, adopting a hierarchical architecture. First, to enable the development of such balancing schemes, a migration technique is employed to perform reliable and low-latency simulation load transfers. Then, a centralized balancing scheme is designed; this scheme employs local and cluster monitoring mechanisms to observe the distributed load changes and identify imbalances, and it uses load reallocation policies to determine a distribution of load and minimize imbalances. As a measure to overcome the drawbacks of this scheme, such as bottlenecks, overheads, global synchronization, and a single point of failure, a distributed redistribution algorithm is designed. Extensions of the distributed balancing scheme are also developed to improve the detection of and the reaction to load imbalances. These extensions introduce communication delay detection, migration latency awareness, self-adaptation, and load oscillation prediction in the load redistribution algorithm. The developed balancing systems successfully improved the use of shared resources and increased the performance of distributed simulations.
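The centralized reallocation step can be pictured with a small sketch. The function below is an assumed, simplified stand-in, not the scheme developed in the thesis: monitors report per-resource load, and load is moved from the most-loaded to the least-loaded resource until every resource sits within a tolerance of the average.

```python
def reallocate(loads, tolerance=0.10):
    """Toy centralized load-reallocation pass (illustrative only).

    loads: {resource: load reported by its local/cluster monitor}
    Returns a list of (source, destination, amount) migrations that bring
    every resource within `tolerance` of the average load."""
    loads = dict(loads)                       # work on a copy
    avg = sum(loads.values()) / len(loads)
    migrations = []
    while True:
        src = max(loads, key=loads.get)       # most loaded resource
        dst = min(loads, key=loads.get)       # least loaded resource
        if loads[src] - avg <= tolerance * avg:
            break                             # imbalance already within tolerance
        amount = min(loads[src] - avg, avg - loads[dst])
        loads[src] -= amount
        loads[dst] += amount
        migrations.append((src, dst, round(amount, 3)))
    return migrations

# monitors reporting normalized load on three shared resources
print(reallocate({"cluster1": 0.9, "cluster2": 0.2, "cluster3": 0.4}))
# -> [('cluster1', 'cluster2', 0.3), ('cluster1', 'cluster3', 0.1)]
```

A distributed variant, as pursued in the thesis, would avoid the single coordinator implied here, which is exactly the bottleneck and single-point-of-failure drawback the abstract mentions.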
|
158 |
The EU as a balancing power in transatlantic relations : structural incentives or deliberate plans? Cladi, Lorenzo. January 2011
The purpose of this thesis is to provide a critical evaluation of the neorealist theory of international relations and its soft balancing variant through the use of case studies referring to transatlantic relations in the post-Cold War era. Each case study indicates a specific category of power. These are: i) Military - the European attempt to create a common military arm from 1991 to 2003. ii) Diplomatic - the EU's involvement in the Israeli-Palestinian conflict from 1991 to 2003. iii) Economic - the EU-USA steel dispute in 2002/03. In particular, the thesis undertakes to analyse whether the EU balanced the USA in the post-Cold War period either as a result of the altered structural distribution of capabilities within the international system (unipolarity) or of a set of deliberate plans to do so. After introducing the concepts of unipolarity, hard and soft balancing, the thesis outlines three comprehensive answers that neorealist scholars have generated as to whether the USA can or cannot be balanced in the post-Cold War international system, namely the structural, the soft balancing, and the alternative structural options. Then, drawing on a defensive realist perspective, this research goes on to consider the creation of the EU as a great power in the post-Cold War era. In light of this, the thesis aims to find out whether the rise of the EU as a great power has had an impact upon unipolarity either because of structural incentives or because of a predetermination to frustrate the aggressive policies of the unipolar state. The thesis then proceeds to investigate whether, throughout the case study series, the EU has balanced the USA. The case studies highlight that the EU, freed from the rigid bipolar stalemate it had been locked into during the Cold War, undertook to exert greater influence on the world stage in the post-Cold War period. To some extent the EU has accomplished this in all of the power dimensions analysed in this thesis. Nevertheless, the EU's efforts to hold sway within the international system were not aimed at addressing the relative power imbalance created by unipolarity, and there were no deliberate plans harboured by the EU to frustrate the influence of any aggressive unipolar state. Overall, this thesis found the causal logic outlined by neorealism to be convincing to the extent that the EU emerged as a great power in the post-Cold War era and had greater freedom of action under unipolarity. However, with the partial exception of the economic dimension of power, there was no persuasive evidence uncovered to support the anticipated outcome of the neorealist theoretical slant, namely that great powers tend to balance each other. Moreover, while the soft balancing claim is considered to have promise as an attempt to understand how the EU can respond to US power under unipolarity, this study did not find sufficient evidence of the EU's deliberate intentions of doing so.
|
159 |
Computing resources sensitive parallelization of neural networks for large scale diabetes data modelling, diagnosis and prediction. Qi, Hao. January 2011
Diabetes has become one of the most severe diseases due to an increasing number of diabetes patients globally. A large amount of digital data on diabetes has been collected through various channels. How to utilize these data sets to help doctors make decisions on the diagnosis, treatment and prediction of diabetic patients poses many challenges to the research community. The thesis investigates mathematical models with a focus on neural networks for large scale diabetes data modelling and analysis by utilizing modern computing technologies such as grid computing and cloud computing. These computing technologies provide users with an inexpensive way to access extensive computing resources over the Internet for solving data and computationally intensive problems. This thesis evaluates the performance of seven representative machine learning techniques in classification of diabetes data, and the results show that the neural network produces the best accuracy in classification but incurs high overhead in data training. As a result, the thesis develops MRNN, a parallel neural network model based on the MapReduce programming model, which has become an enabling technology in support of data intensive applications in the clouds. By partitioning the diabetic data set into a number of equally sized data blocks, the workload in training is distributed among a number of computing nodes for speedup in data training. MRNN is first evaluated in small scale experimental environments using 12 mappers and subsequently evaluated in large scale simulated environments using up to 1000 mappers. Both the experimental and simulation results have shown the effectiveness of MRNN in classification and its high scalability in data training. MapReduce does not have a sophisticated job scheduling scheme for heterogeneous computing environments in which the computing nodes may have varied computing capabilities. For this purpose, this thesis develops a load balancing scheme based on genetic algorithms with an aim to balance the training workload among heterogeneous computing nodes. The nodes with more computing capacity will receive more MapReduce jobs for execution. Divisible load theory is employed to guide the evolutionary process of the genetic algorithm with an aim to achieve fast convergence. The proposed load balancing scheme is evaluated in large scale simulated MapReduce environments with varied levels of heterogeneity using different sizes of data sets. All the results show that the genetic algorithm based load balancing scheme significantly reduces the makespan of job execution in comparison with the time consumed without load balancing.
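The genetic-algorithm load-balancing idea can be sketched compactly. The code below is an assumed, toy version whose names, parameters and operators are illustrative rather than the thesis implementation: equally sized data blocks are assigned to nodes of different speeds, fitness is the makespan of the slowest node, and the initial population is seeded with the proportional split suggested by divisible load theory to speed up convergence.

```python
import random

def makespan(assignment, speeds, block_size=1.0):
    """Completion time of the slowest node: blocks assigned / node speed."""
    return max(count * block_size / speed for count, speed in zip(assignment, speeds))

def ga_balance(num_blocks, speeds, pop_size=30, generations=200):
    """Toy GA assigning equal-sized blocks to heterogeneous nodes (illustrative)."""
    n = len(speeds)

    def proportional():
        # divisible-load-theory seed: blocks proportional to node speed
        counts = [int(num_blocks * s / sum(speeds)) for s in speeds]
        counts[0] += num_blocks - sum(counts)          # absorb rounding remainder
        return counts

    def random_individual():
        cuts = sorted(random.randint(0, num_blocks) for _ in range(n - 1))
        return [b - a for a, b in zip([0] + cuts, cuts + [num_blocks])]

    def repair(child):
        # keep the total number of assigned blocks equal to num_blocks
        diff = num_blocks - sum(child)
        while diff != 0:
            k = random.randrange(n)
            step = 1 if diff > 0 else -1
            if child[k] + step >= 0:
                child[k] += step
                diff -= step
        return child

    pop = [proportional()] + [random_individual() for _ in range(pop_size - 1)]
    for _ in range(generations):
        pop.sort(key=lambda ind: makespan(ind, speeds))
        survivors = pop[: pop_size // 2]               # keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randint(1, n - 1)             # one-point crossover
            child = a[:cut] + b[cut:]
            i, j = random.sample(range(n), 2)          # mutation: move one block
            if child[i] > 0:
                child[i] -= 1
                child[j] += 1
            children.append(repair(child))
        pop = survivors + children
    return min(pop, key=lambda ind: makespan(ind, speeds))

# three nodes where the first is twice as fast as the others
print(ga_balance(100, speeds=[2.0, 1.0, 1.0]))         # converges to [50, 25, 25]
```

Seeding the population with the divisible-load split gives the search a near-balanced starting point, which is the role the abstract ascribes to divisible load theory.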
|
160 |
Investigation of the workforce effect of an assembly line using multi-objective optimization. López De La Cova Trujillo, Miguel Angel; Bertilsson, Niklas. January 2016
The aim of industrial production has changed since the era of mass production at the beginning of the 20th century. Today, production flexibility determines manufacturing companies' course of action. In this sense, Volvo Group Trucks Operations is interested in meeting customer demand on its assembly lines by adjusting manpower. Thus, this investigation attempts to analyze the effect of manning on the main final assembly line for thirteen-liter heavy-duty diesel engines at Volvo Group Trucks Operations in Skövde by means of discrete-event simulation. This project presents a simulation model of the assembly line. Building the model required data. On the one hand, qualitative data were collected to improve knowledge in the fields related to the project topic, as well as to fill gaps in information at certain points of the project. On the other hand, programming the simulation model required quantitative data. Once the model was completed, simulation results were obtained through simulation-based optimization. This optimization process tested 50,000 different workforce scenarios to find the most efficient solutions for three different sequences. Among all results, the most interesting one for Volvo is the scenario that renders 80% of today's throughput with the minimum number of workers. Consequently, as a case study, a bottleneck analysis and a worker performance analysis were performed for this scenario. Finally, a flexible and fully functional model that delivers the desired results was developed. These results provide a comparison among different manning scenarios, considering throughput as the main measure of the main final assembly line's performance. Analysis of the results revealed the system's output behavior, which allows the optimal system output to be predicted for a given number of operators.
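As an illustration of the kind of scan the optimization performs, the sketch below enumerates manning scenarios over a toy serial-line model and keeps the cheapest one that still reaches 80% of the baseline throughput. The station model, rates, and station count are invented for illustration; the thesis uses a detailed discrete-event simulation and a much larger scenario space.

```python
from itertools import product

def station_rate(base_rate, workers):
    """Invented station model: output grows with manning, with diminishing returns."""
    return base_rate * (1 - 0.6 ** workers)

def line_throughput(manning, base_rates):
    """A serial assembly line is limited by its slowest station."""
    return min(station_rate(rate, w) for rate, w in zip(base_rates, manning))

def fewest_workers_for(target, base_rates, max_per_station=4):
    """Scan manning scenarios and keep the one with the fewest workers
    that still reaches the target throughput."""
    best = None
    for manning in product(range(1, max_per_station + 1), repeat=len(base_rates)):
        if line_throughput(manning, base_rates) >= target:
            if best is None or sum(manning) < sum(best):
                best = manning
    return best

rates = [30, 25, 28, 26]                          # engines/hour per station, illustrative
baseline = line_throughput((4, 4, 4, 4), rates)   # "today's" manning as the reference
print(fewest_workers_for(0.8 * baseline, rates))  # e.g. (2, 3, 2, 3)
```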
|