151

Natural balancing of three-phase 2-cell and 3-cell multicell converters

Salagae, Isaac Mahijoko 03 1900 (has links)
Thesis (PhD (Electrical and Electronic Engineering))--University of Stellenbosch, 2010. / ENGLISH ABSTRACT: The multicell inverter, being a widely used multilevel converter, has received much attention in recent years due to problems associated with cell capacitor voltage. In this dissertation we study the balancing problem with a focus on steady-state unbalance. This is achieved by a systematic and mathematically rigorous study of the natural balancing mechanisms of the three-phase 2-cell and 3-cell multicell converter, undertaken using dynamic modelling of the multicell converter, Bennet's geometric model, and steady-state and time-constant analysis. Space vector analysis is also performed for the three-phase 2-cell multicell converter. The theory is verified by comparing theoretical results with simulation results. / AFRIKAANSE OPSOMMING (translated): The multicell inverter, as a widely used multilevel converter, has attracted great interest in recent years because of the problems associated with cell capacitor voltage. In this dissertation the balancing problem is studied with the emphasis on steady-state unbalance. This was done by making a systematic and mathematically rigorous study of the natural balancing mechanisms of the three-phase 2-cell and 3-cell multicell converter, using dynamic modelling of the multicell converter, Bennet's geometric model, and steady-state and time-constant analyses; space vector analysis was also carried out for the three-phase 2-cell multicell converter. The theory is confirmed by comparing the theoretical results with the simulation results.
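For readers unfamiliar with the effect described above, the following is a minimal numerical sketch (not the author's model) of natural balancing in a single-phase 2-cell flying-capacitor leg: under phase-shifted PWM into a dissipative R-L load, the cell-capacitor voltage drifts toward Vdc/2 on its own. All component values, the switching frequency, and the forward-Euler integration are illustrative assumptions.

```python
# Minimal numerical sketch (not the author's model) of natural balancing in a
# single-phase 2-cell flying-capacitor leg: with phase-shifted PWM and a
# dissipative R-L load, the cell-capacitor voltage drifts toward Vdc/2 on its
# own. All parameter values below are illustrative assumptions.
Vdc   = 400.0        # DC bus voltage [V]
C     = 200e-6       # cell (flying) capacitor [F]
R, L  = 20.0, 2e-3   # load resistance [ohm] and inductance [H]
f_sw  = 2e3          # switching frequency [Hz]
duty  = 0.6          # common duty cycle for both cells
dt, t_end = 1e-6, 0.5

vc, i_load, t = 150.0, 0.0, 0.0          # start 50 V below the balanced value Vdc/2
while t < t_end:
    # Sawtooth carriers, cell 2 shifted by half a switching period.
    s1 = 1.0 if duty > (t * f_sw) % 1.0 else 0.0
    s2 = 1.0 if duty > (t * f_sw + 0.5) % 1.0 else 0.0

    v_out = s2 * Vdc + (s1 - s2) * vc            # leg output voltage w.r.t. the negative rail
    i_load += (v_out - R * i_load) / L * dt      # R-L load (forward Euler)
    vc     += (s2 - s1) * i_load / C * dt        # capacitor carries load current only when s1 != s2
    t += dt

print(f"cell-capacitor voltage after {t_end:.1f} s: {vc:.1f} V (balanced value Vdc/2 = {Vdc/2:.0f} V)")
```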
152

Scale and Concurrency of Massive File System Directories

Patil, Swapnil 01 May 2013 (has links)
File systems store data in files and organize these files in directories. Over decades, file systems have evolved to handle increasingly large files: they distribute files across a cluster of machines, they parallelize access to these files, they decouple data access from metadata access, and hence they provide scalable file access for high-performance applications. Sadly, most cluster-wide file systems lack any sophisticated support for large directories. In fact, most cluster file systems continue to use directories that were designed for humans, not for large-scale applications. The former use-case typically involves hundreds of files and infrequent concurrent mutations in each directory, while the latter use-case consists of tens of thousands of concurrent threads that simultaneously create large numbers of small files in a single directory at very high speeds. As a result, most cluster file systems exhibit very poor file-create rates in a directory, either due to limited scalability from using a single centralized directory server or due to reduced concurrency from using a system-wide synchronization mechanism. This dissertation proposes a directory architecture called GIGA+ that enables a directory in a cluster file system to store millions of files and sustain hundreds of thousands of concurrent file creations every second. GIGA+ makes two contributions: a concurrent indexing technique to scale out a growing directory on many servers and an efficient layered design to scale up performance. GIGA+ uses a hash-based, incremental partitioning algorithm that enables highly concurrent directory indexing through asynchrony and eventual consistency of the internal indexing state (while providing strong consistency guarantees to the application data). This dissertation analyzes several trade-offs between data migration overhead, load balancing effectiveness, directory scan performance, and entropy of indexing state made by the GIGA+ design, and compares them with policies used in other systems. GIGA+ also demonstrates a modular implementation that separates directory distribution from directory representation. It layers a client-server middleware, which spreads work among many GIGA+ servers, on top of a backend storage system, which manages on-disk directory representation. This dissertation studies how system behavior is tightly dependent on both the indexing scheme and the on-disk implementations, and evaluates how the system performs for different backend configurations including local and shared-disk stores. The GIGA+ prototype delivers highly scalable directory performance (that exceeds the most demanding Petascale-era requirements), provides the traditional UNIX file system interface (that can run applications without any modifications), and offers new functionality layered on existing cluster file systems (that lack support for distributed directories).
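As a rough illustration of the hash-based, incremental partitioning idea (a sketch in the spirit of GIGA+, not the dissertation's code; the class names, split threshold, and one-partition-per-server cap are invented for the example):

```python
# Toy sketch of GIGA+-style hash-based incremental directory partitioning
# (illustrative only; not the dissertation's implementation, and the threshold,
# server list, and one-partition-per-server cap are simplifications). A
# directory starts as a single partition on one server; any partition that
# grows past SPLIT_THRESHOLD splits its hash range in half and hands the new
# half to another server, so the directory spreads out only as it grows.
import hashlib

SPLIT_THRESHOLD = 4                 # tiny, so the example splits quickly
SERVERS = ["srv0", "srv1", "srv2", "srv3"]

def h(name: str) -> int:
    return int(hashlib.md5(name.encode()).hexdigest(), 16)

class Partition:
    def __init__(self, bits: int, value: int, server: str):
        self.bits, self.value, self.server = bits, value, server
        self.files: set[str] = set()
    def holds(self, name: str) -> bool:             # owns the name if the low `bits` hash bits match
        return h(name) & ((1 << self.bits) - 1) == self.value

class Directory:
    def __init__(self):
        self.partitions = [Partition(0, 0, SERVERS[0])]
        self.next_server = 1
    def create(self, name: str):
        part = next(p for p in self.partitions if p.holds(name))
        part.files.add(name)
        if len(part.files) > SPLIT_THRESHOLD and self.next_server < len(SERVERS):
            self._split(part)
    def _split(self, part: Partition):
        sibling = Partition(part.bits + 1, part.value | (1 << part.bits),
                            SERVERS[self.next_server])
        self.next_server += 1
        part.bits += 1                              # the old partition keeps the other half
        moved = {f for f in part.files if sibling.holds(f)}
        part.files -= moved
        sibling.files |= moved
        self.partitions.append(sibling)

d = Directory()
for i in range(40):
    d.create(f"file-{i:04d}")
for part in d.partitions:
    print(f"{part.server}: hash bits={part.bits} value={part.value:b} files={len(part.files)}")
```

Splitting only the partition that overflows, and only when it overflows, lets the index grow without global coordination; servers and clients can learn about new partitions lazily, which is the asynchrony and eventual consistency of indexing state that the abstract refers to.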
153

Performance variation and job enrichment in manual assembly work

Ng, Tat-lun, 伍達倫 January 1978 (has links)
published_or_final_version / Industrial Engineering / Master / Master of Science in Engineering
154

A critical analysis of the Third Circuit's test for due process violations in denials of defense witness immunity requests

Krauss, Samuel Fox 03 October 2014 (has links)
Several Supreme Court cases in the latter half of the 20th century established a criminal defendant's due process right to put forward an effective defense. To put forward an effective defense, one must be able to introduce exculpatory evidence on one's behalf. A defendant's witness may claim the right against self-incrimination, in which case the defendant may request immunity for the witness so that he will testify. If that request is denied, a defendant's due process right to put forward an effective defense may be implicated. The refusal to grant defense witness immunity is one instance of suppression of evidence. In a string of cases in the Third Circuit, the courts have implemented a test for determining under what conditions a due process violation occurs in this situation. But there is significant reason to believe that, in implementing the test, the court has relied on incorrect assumptions. This paper discusses how the court has relied on unwarranted assumptions to make due process determinations, and concludes that in so doing it has imposed too high a standard for a due process violation. First, the court interprets the test as a test for a due process violation, when there is reason to believe that the court articulating the test meant it to be a test for the appropriateness of judicially created immunity as the remedy for an existing due process violation. Second, the court makes an unwarranted assumption that any strong governmental interest countervails against a grant of witness immunity. Third, the court imposes too high a standard for determining what counts as a strong governmental interest because it does not give sufficient weight to the context of the determination. These three unwarranted assumptions suggest that the court has imposed too high a standard for determining due process violations. / text
155

Effective cooperative scheduling of task-parallel applications on multiprogrammed parallel architectures

Varisteas, Georgios January 2015 (has links)
Emerging architecture designs include tens of processing cores on a single chip die; it is believed that the number of cores will reach the hundreds in not so many years from now. However, most common parallel workloads cannot fully utilize such systems. They expose fluctuating parallelism, and do not scale up indefinitely, as there is usually a point after which synchronization costs outweigh the gains of parallelism. The combination of these issues suggests that large-scale systems will be either multiprogrammed or have their unneeded resources powered off. Multiprogramming leads to hardware resource contention and, as a result, application performance degradation, even when there are enough resources, due to negative sharing effects and increased bus traffic. Most often this degradation is quite unbalanced between co-runners, as some applications dominate the hardware over others. Current operating systems blindly provide applications with access to as many resources as they ask for. This leads to over-committing the system with too many threads, memory contention and increased bus traffic. Because an application has no insight into system-wide resource demands, most parallel workloads will create as many threads as there are available cores. If every co-running application does the same, the system ends up with N times as many threads as cores. Threads then need to time-share cores, so the continuous context switching and cache-line evictions generate considerable overhead. This thesis proposes a novel solution across all software layers that achieves throughput optimization and uniform performance degradation of co-running applications. Through a novel fully automated approach (DVS and Palirria), task-parallel applications can accurately quantify their available parallelism online, generating a meaningful metric as parallelism feedback to the operating system. A second component in the operating system scheduler (Pond) uses such feedback from all co-runners to effectively partition available resources. The proposed two-level scheduling scheme ultimately achieves having each co-runner degrade its performance by the same factor, relative to how it would execute with unrestricted, isolated access to the same hardware. We call this fair scheduling, departing from the traditional notion of equal opportunity, which causes uneven degradation: some experiments show at least one application degrading its performance 10 times less than its co-runners. / QC 20151016
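A minimal sketch of the kind of space-sharing this two-level scheme implies is shown below; it is not the DVS/Palirria/Pond implementation. It assumes each application reports a parallelism estimate, and all requests are shrunk by a common factor when the machine is oversubscribed, so every co-runner is degraded roughly equally; the rounding policy and numbers are illustrative.

```python
# Minimal sketch of parallelism-feedback space sharing (illustrative; not the
# thesis's DVS/Palirria/Pond implementation). Each application reports how many
# cores it can currently use; when the total exceeds the machine, every request
# is scaled by the same factor, so co-runners degrade uniformly instead of
# over-committing cores and time-sharing them.
def partition_cores(requests: dict[str, int], total_cores: int) -> dict[str, int]:
    demand = sum(requests.values())
    if demand <= total_cores:
        return dict(requests)                       # no contention: grant everything asked for
    scale = total_cores / demand                    # common degradation factor
    # max(1, ...) keeps every co-runner runnable; with many tiny requests this
    # could slightly oversubscribe, which a real scheduler would handle.
    alloc = {app: max(1, int(req * scale)) for app, req in requests.items()}
    leftover = total_cores - sum(alloc.values())    # cores lost to rounding down
    by_remainder = sorted(requests,
                          key=lambda a: requests[a] * scale - int(requests[a] * scale),
                          reverse=True)
    for app in by_remainder:                        # hand leftovers to the largest remainders
        if leftover <= 0:
            break
        alloc[app] += 1
        leftover -= 1
    return alloc

print(partition_cores({"A": 48, "B": 16, "C": 8}, total_cores=32))
# {'A': 21, 'B': 7, 'C': 4}: each application is granted ~44% of its request.
```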
156

Dynamic Load Balancing Schemes for Large-scale HLA-based Simulations

De Grande, Robson E. 26 July 2012 (has links)
Dynamic balancing of computation and communication load is vital for the execution stability and performance of distributed, parallel simulations deployed on shared, unreliable resources of large-scale environments. High Level Architecture (HLA) based simulations can experience a decrease in performance due to imbalances that are produced initially and/or during run-time. These imbalances are generated by the dynamic load changes of distributed simulations or by unknown, non-managed background processes resulting from the non-dedication of shared resources. Due to the dynamic execution characteristics of elements that compose distributed simulation applications, the computational load and interaction dependencies of each simulation entity change during run-time. These dynamic changes lead to an irregular load and communication distribution, which increases overhead of resources and execution delays. A static partitioning of load is limited to deterministic applications and is incapable of predicting the dynamic changes caused by distributed applications or by external background processes. Due to the relevance in dynamically balancing load for distributed simulations, many balancing approaches have been proposed in order to offer a sub-optimal balancing solution, but they are limited to certain simulation aspects, specific to determined applications, or unaware of HLA-based simulation characteristics. Therefore, schemes for balancing the communication and computational load during the execution of distributed simulations are devised, adopting a hierarchical architecture. First, in order to enable the development of such balancing schemes, a migration technique is also employed to perform reliable and low-latency simulation load transfers. Then, a centralized balancing scheme is designed; this scheme employs local and cluster monitoring mechanisms in order to observe the distributed load changes and identify imbalances, and it uses load reallocation policies to determine a distribution of load and minimize imbalances. As a measure to overcome the drawbacks of this scheme, such as bottlenecks, overheads, global synchronization, and single point of failure, a distributed redistribution algorithm is designed. Extensions of the distributed balancing scheme are also developed to improve the detection of and the reaction to load imbalances. These extensions introduce communication delay detection, migration latency awareness, self-adaptation, and load oscillation prediction in the load redistribution algorithm. Such developed balancing systems successfully improved the use of shared resources and increased distributed simulations' performance.
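The centralized reallocation step can be pictured with the toy sketch below (illustrative only; the imbalance metric, the greedy policy, and all names are assumptions, not the thesis's scheme): the monitor reports per-federate load on each node, and the balancer migrates the cheapest federate off the most loaded node until the nodes are within a tolerance of the mean.

```python
# Toy sketch of one centralized reallocation pass (illustrative; not the thesis's
# scheme). The monitor reports per-federate load on each node; the balancer
# greedily moves the cheapest federate off the most loaded node until all nodes
# are within a tolerance of the mean load, returning the migrations to perform.
def rebalance(nodes: dict[str, dict[str, float]], tolerance: float = 0.10):
    """nodes maps node -> {federate: load}; mutated in place. Returns (federate, src, dst) moves."""
    migrations = []
    for _ in range(100):                                   # safety bound
        totals = {n: sum(feds.values()) for n, feds in nodes.items()}
        mean = sum(totals.values()) / len(totals)
        hot = max(totals, key=totals.get)
        cold = min(totals, key=totals.get)
        if totals[hot] <= mean * (1 + tolerance) or not nodes[hot]:
            break                                          # balanced enough, or nothing movable
        fed = min(nodes[hot], key=nodes[hot].get)          # cheapest migration candidate
        load = nodes[hot].pop(fed)
        if totals[hot] - load < totals[cold]:              # move would overshoot: undo and stop
            nodes[hot][fed] = load
            break
        nodes[cold][fed] = load
        migrations.append((fed, hot, cold))
    return migrations

cluster = {"n0": {"f1": 30.0, "f2": 20.0, "f3": 10.0},
           "n1": {"f4": 15.0},
           "n2": {"f5": 15.0}}
print(rebalance(cluster))      # [('f3', 'n0', 'n1'), ('f2', 'n0', 'n2')]
```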
157

The EU as a balancing power in transatlantic relations : structural incentives or deliberate plans?

Cladi, Lorenzo January 2011 (has links)
The purpose of this thesis is to provide a critical evaluation of the neorealist theory of international relations and its soft balancing variant through the use of case studies referring to transatlantic relations in the post-Cold War era. Each case study indicates a specific category of power. These are: i) Military - the European attempt to create a common military arm from 1991 to 2003. ii) Diplomatic - the EU's involvement in the Israeli-Palestinian conflict from 1991 to 2003. iii) Economic the EU-USA steel dispute in 2002/03. In particular, the thesis undertakes to analyse whether the EU balanced the USA in the post-Cold War period either as a result of the altered structural distribution of capabilities within the international system (unipolarity) or of a set of deliberate plans to do so. After introducing the concepts of unipolarity, hard and soft balancing, the thesis outlines three comprehensive answers that neorealist scholars have generated as to whether the USA can or cannot be balanced in the post-Cold War international system, namely the structural, the soft balancing, and the alternative structural options. Then, drawing on a defensive realist perspective, this research goes on to consider the creation of the EU as a great power in the post-Cold War era. In light of this, the thesis aims to find out whether the rise of the EU as a great power has had an impact upon unipolarity either because of structural incentives or because of a predetermination to frustrate the aggressive policies of the unipolar state. The thesis then proceeds to investigate whether throughout the case studies series the EU has balanced the USA. The case studies highlight that the EU, freed from the rigid bipolar stalemate it had been locked into during the Cold War, undertook to exert greater influence on the world stage in the post-Cold War period. To some extent the EU has accomplished this in all of the power dimensions analysed in this thesis. Nevertheless, the EU's efforts to hold sway within the international system were not aimed at addressing the relative power imbalance created by unipolarity, and there were no deliberate plans harboured by the EU to frustrate the influence of any aggressive unipolar state. Overall, this thesis found the causal logic outlined by neorealism to be convincing to the extent that the EU emerged as a great power in the post-Cold War era and had greater freedom of action under unipolarity. However, with the partial exception of the economic dimension of power, there was no persuasive evidence uncovered to support the anticipated outcome of the neorealist theoretical slant, namely that great powers tend to balance each other. Moreover, while the soft balancing claim is considered to have promise as an attempt to understand how the EU can respond to US power under unipolarity, this study did not find sufficient evidence of the EU's deliberate intentions of doing so.
158

Computing resources sensitive parallelization of neural networks for large scale diabetes data modelling, diagnosis and prediction

Qi, Hao January 2011 (has links)
Diabetes has become one of the most severe diseases due to an increasing number of diabetes patients globally. A large amount of digital data on diabetes has been collected through various channels. How to utilize these data sets to help doctors make decisions on the diagnosis, treatment and prediction of diabetic patients poses many challenges to the research community. The thesis investigates mathematical models, with a focus on neural networks, for large-scale diabetes data modelling and analysis by utilizing modern computing technologies such as grid computing and cloud computing. These computing technologies provide users with an inexpensive way to access extensive computing resources over the Internet for solving data- and computationally intensive problems. This thesis evaluates the performance of seven representative machine learning techniques in the classification of diabetes data, and the results show that the neural network produces the best accuracy in classification but incurs a high overhead in data training. As a result, the thesis develops MRNN, a parallel neural network model based on the MapReduce programming model, which has become an enabling technology in support of data-intensive applications in the clouds. By partitioning the diabetic data set into a number of equally sized data blocks, the workload in training is distributed among a number of computing nodes for speedup in data training. MRNN is first evaluated in small-scale experimental environments using 12 mappers and subsequently evaluated in large-scale simulated environments using up to 1000 mappers. Both the experimental and simulation results have shown the effectiveness of MRNN in classification and its high scalability in data training. MapReduce does not have a sophisticated job scheduling scheme for heterogeneous computing environments in which the computing nodes may have varied computing capabilities. For this purpose, this thesis develops a load balancing scheme based on genetic algorithms with the aim of balancing the training workload among heterogeneous computing nodes. The nodes with more computing capacity receive more MapReduce jobs for execution. Divisible load theory is employed to guide the evolutionary process of the genetic algorithm with the aim of achieving fast convergence. The proposed load balancing scheme is evaluated in large-scale simulated MapReduce environments with varied levels of heterogeneity using different sizes of data sets. All the results show that the genetic algorithm based load balancing scheme significantly reduces the makespan of job execution in comparison with the time consumed without load balancing.
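The thesis uses a genetic algorithm guided by divisible load theory; the sketch below shows only the underlying divisible-load idea, with made-up node speeds and names: split the training records among heterogeneous mappers in proportion to measured speed so that they all finish at roughly the same time.

```python
# Illustrative divisible-load split (not the thesis's genetic algorithm): give
# each heterogeneous mapper a share of training records proportional to its
# measured processing speed, so all mappers finish at about the same time and
# the MapReduce job's makespan shrinks. Speeds below are made-up numbers.
def split_training_set(n_records: int, speeds: dict[str, float]) -> dict[str, int]:
    total_speed = sum(speeds.values())
    shares = {node: int(n_records * s / total_speed) for node, s in speeds.items()}
    # Give any records lost to integer rounding to the fastest node.
    shares[max(speeds, key=speeds.get)] += n_records - sum(shares.values())
    return shares

speeds = {"node-a": 4.0, "node-b": 2.0, "node-c": 1.0}     # records/ms, hypothetical
shares = split_training_set(70_000, speeds)
print(shares)                                              # {'node-a': 40000, 'node-b': 20000, 'node-c': 10000}
print({n: shares[n] / speeds[n] for n in shares})          # near-equal finish times
```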
159

Investigation of the workforce effect of an assembly line using multi-objective optimization

López De La Cova Trujillo, Miguel Angel, Bertilsson, Niklas January 2016 (has links)
The aim of industrial production has changed since the era of mass production at the beginning of the 20th century. Today, production flexibility determines manufacturing companies' course of action. In this sense, Volvo Group Trucks Operations is interested in meeting customer demand on its assembly lines by adjusting manpower. Thus, this investigation attempts to analyze the effect of manning on the main final assembly line for thirteen-liter heavy-duty diesel engines at Volvo Group Trucks Operations in Skövde by means of discrete-event simulation. This project presents a simulation model of the assembly line. Building the model required data: on the one hand, qualitative data were collected to improve knowledge of the fields related to the project topic, as well as to address the lack of information at certain points of the project; on the other hand, programming the simulation model required quantitative data. Once the model was completed, simulation results were obtained through simulation-based optimization. This optimization process tested 50,000 different workforce scenarios to find the most efficient solutions for three different sequences. Among all the results, the most interesting one for Volvo is the scenario that renders 80% of today's throughput with the minimum number of workers. Consequently, as a case study, a bottleneck analysis and a worker performance analysis were performed for this scenario. Finally, a flexible and fully functional model that delivers the desired results was developed. These results provide a comparison among different manning scenarios, with throughput as the main measure of the main final assembly line's performance. Analyzing the results revealed the system's output behavior, which allows the optimal system output to be predicted for a given number of operators.
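A tiny post-processing sketch of the scenario selection described above (hypothetical numbers, not the study's data or code): pick the manning scenario that reaches 80% of today's throughput with the fewest workers.

```python
# Tiny post-processing sketch (hypothetical numbers, not the study's data): from
# the (workers, simulated throughput) pairs produced by the simulation-based
# optimization, pick the scenario reaching 80% of today's throughput with the
# fewest workers.
def minimal_workforce(results: list[tuple[int, float]], baseline: float, target: float = 0.80):
    feasible = [(w, t) for w, t in results if t >= target * baseline]
    return min(feasible, key=lambda wt: wt[0]) if feasible else None

simulated = [(42, 61.0), (38, 58.5), (34, 52.1), (30, 47.5), (26, 41.2)]   # engines/hour, made up
print(minimal_workforce(simulated, baseline=61.0))   # (34, 52.1) -- 80% of 61.0 is 48.8
```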
160

Improve the Performance and Scalability of RAID-6 Systems Using Erasure Codes

Wu, Chentao 15 November 2012 (has links)
With the support of erasure codes, RAID-6 is widely used to tolerate concurrent failures of any two disks, providing a higher level of reliability. Among many implementations, one class of codes called Maximum Distance Separable (MDS) codes aims to offer data protection against disk failures with optimal storage efficiency. Typical MDS codes contain horizontal and vertical codes. However, because of the limitations of the horizontal parity or the diagonal/anti-diagonal parities used in MDS codes, existing RAID-6 systems suffer several important problems in performance and scalability, such as low write performance, unbalanced I/O, and high migration cost in the scaling process. To address these problems, in this dissertation we design techniques for high-performance, scalable RAID-6 systems, including high-performance, load-balancing erasure codes (H-Code and HDP Code) and a Stripe-based Data Migration (SDM) scheme. We also propose a flexible MDS Scaling Framework (MDS-Frame), which can integrate H-Code, HDP Code and the SDM scheme together. Detailed evaluation results are also given in this dissertation.
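For context, the sketch below shows a generic P/Q (Reed-Solomon style) RAID-6 parity construction over GF(2^8), one common way to realize the two-failure tolerance the abstract refers to; it is not H-Code, HDP Code, the SDM scheme, or the XOR-only array codes the dissertation studies, and the chunk contents are illustrative.

```python
# Minimal sketch of generic RAID-6 P/Q parity over GF(2^8) (one common way to
# get two-failure tolerance; this is NOT H-Code, HDP Code, or the SDM scheme).
# P is the horizontal XOR parity; Q weights chunk i by g**i with generator g = 2,
# which is what makes any two concurrent disk failures recoverable.
def gf_mul(a: int, b: int) -> int:                  # multiply in GF(2^8), polynomial 0x11d
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return r

def pq_parity(chunks: list[bytes]) -> tuple[bytes, bytes]:
    size = len(chunks[0])
    p, q = bytearray(size), bytearray(size)
    g_i = 1                                         # g**i, starting at g**0
    for chunk in chunks:
        for k in range(size):
            p[k] ^= chunk[k]
            q[k] ^= gf_mul(g_i, chunk[k])
        g_i = gf_mul(g_i, 2)
    return bytes(p), bytes(q)

def recover_single(chunks_with_hole: list, p: bytes) -> bytes:
    """Rebuild the one missing data chunk (marked None) from P alone."""
    out = bytearray(p)
    for chunk in chunks_with_hole:
        if chunk is not None:
            for k in range(len(p)):
                out[k] ^= chunk[k]
    return bytes(out)

stripe = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]       # four data disks' chunks
p, q = pq_parity(stripe)
print("P =", p.hex(), " Q =", q.hex())
print(recover_single([stripe[0], None, stripe[2], stripe[3]], p) == stripe[1])   # True
```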
