121

Measuring, modeling, and optimizing counterintuitive performance phenomena in power-scalable, parallel systems

Chang, Hung-Ching 09 April 2015 (has links)
The demands of exascale computing systems and applications have pushed for a rapid, continual design paradigm coupled with increasing design complexity from the interaction between the application, the middleware, and the underlying system hardware, which forms a breeding ground for inefficiency. This work seeks to improve system efficiency by exposing the root causes of unexpected performance slowdowns (e.g., lower performance at higher processor speeds) that occur more frequently in power-scalable systems where raw processor speed varies. More precisely, we perform an exhaustive empirical study that conclusively shows that increasing processor speed often reduces performance and wastes energy. Our experimental work shows that the frequency of occurrence and the magnitude of slowdowns grow with clock frequency and parallelism, indicating that such slowdowns will be observed increasingly often given current trends in processor and system design. Performance speedups at lower frequencies (or slowdowns at higher frequencies) have been anecdotally observed in the literature since 2004, but no prior research has explained or exploited this phenomenon. This work conclusively demonstrates that performance slowdowns during processor speedup phases can exceed 47% in common I/O workloads. Our hypothesis challenges (and ultimately debunks) a fundamental assumption in computer systems: that faster processor speeds result in the same or better performance. In this work, with the use of code and kernel instrumentation, exhaustive experiments, and deep insight into the inner workings of the Linux I/O subsystem, I overcome the challenges of variance, complexity, and nondeterminism and identify I/O resource contention as the root cause of the slowdowns observed during processor speedup. Specifically, the contention arises in the Linux kernel when the journaling block device (JBD) interacts with the ext3/4 file system, introducing file write delays and file synchronization delays. To explain how such I/O contention causes the performance anomaly, I propose analytical models of resource contention among I/O threads that describe the root cause of the observed I/O slowdowns when processors speed up. To this end, I introduce LUC, a runtime system that limits the unintended consequences of power scaling, and demonstrate its effectiveness for two critical parallel transaction-oriented workloads: a mail server (varMail) and online transaction processing (oltp). / Ph. D.
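A minimal measurement sketch of the kind of experiment this abstract describes, assuming an fsync-heavy write loop as the workload of interest; the file path, block size, and iteration count are placeholders, and this is not the author's instrumentation or the LUC runtime. Run it under different CPU frequency settings (e.g., different cpufreq governors) and compare throughput.

```python
# Hedged sketch: time an fsync-heavy write loop, the class of I/O workload in
# which the thesis reports slowdowns at higher processor frequencies.
import os
import time

PATH = "/tmp/fsync_probe.dat"   # placeholder path
BLOCK = b"x" * 4096             # one 4 KiB block per write (assumed size)
ITERATIONS = 2000

def timed_fsync_writes() -> float:
    fd = os.open(PATH, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
    start = time.perf_counter()
    try:
        for _ in range(ITERATIONS):
            os.write(fd, BLOCK)
            os.fsync(fd)        # forces the journal/flush path the thesis studies
    finally:
        os.close(fd)
    return time.perf_counter() - start

if __name__ == "__main__":
    elapsed = timed_fsync_writes()
    print(f"{ITERATIONS} write+fsync ops in {elapsed:.3f} s "
          f"({ITERATIONS / elapsed:.1f} ops/s)")
```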
122

Examination of practices for computing survivability in a distributed network environment

Westmark, Vickie Renee 01 April 2002 (has links)
No description available.
123

Parallelizing a nondeterministic optimization algorithm

D'Souza, Sammy Raymond 01 January 2007 (has links)
This research explores the idea that, for certain optimization problems, there is a way to parallelize the algorithm such that the parallel efficiency can exceed one hundred percent. Specifically, a parallel compiler, PC, is used to apply shortcutting techniques to a metaheuristic, Ant Colony Optimization (ACO), to solve the well-known Traveling Salesman Problem (TSP) on a cluster running the Message Passing Interface (MPI). The results of serial and parallel execution are compared using test datasets from TSPLIB.
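To make the claim about exceeding one hundred percent parallel efficiency concrete, here is a toy sketch (not the PC compiler or the thesis's ACO implementation): independent randomized searches race for a "good enough" TSP tour and the first success stops the rest. With nondeterministic search, the expected time of the fastest of k runs can drop by more than a factor of k, which is how superlinear speedup can arise. The instance and target length are made up.

```python
# Hedged illustration of "shortcutting" in parallel nondeterministic search.
import math
import random
import multiprocessing as mp

random.seed(0)                               # same toy instance in every process
CITIES = [(random.random(), random.random()) for _ in range(30)]
TARGET = 13.0                                # assumed "good enough" tour length

def tour_length(order):
    return sum(math.dist(CITIES[order[i]], CITIES[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def searcher(seed, found):
    rng = random.Random(seed)
    order = list(range(len(CITIES)))
    for _ in range(200_000):                 # bounded so the demo always terminates
        if found.is_set():
            return
        rng.shuffle(order)                   # stand-in for an ACO tour construction
        if tour_length(order) < TARGET:
            found.set()                      # "shortcut": stop the other workers
            return

if __name__ == "__main__":
    found = mp.Event()
    workers = [mp.Process(target=searcher, args=(s, found)) for s in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print("search finished; target reached:", found.is_set())
```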
124

Protecting security in cloud and distributed environments

He, Yijun, 何毅俊 January 2012 (has links)
Encryption helps to ensure that information within a session is not compromised. Authentication and access control measures ensure legitimate and appropriate access to information and prevent inappropriate access to such resources. While encryption, authentication, and access control each has its own role in securing a communication session, a combination of these three mechanisms can provide much better protection for information. This thesis addresses encryption, authentication, and access control related problems in cloud and distributed environments, since these problems are very common in modern organizational environments. The first is a User-friendly Location-free Encryption System for Mobile Users (UFLE). It is an encryption and authentication system that provides maximum security to sensitive data in distributed environments (corporate, home, and outdoor scenarios) while requiring minimum user effort (i.e., no biometric entry or possession of cryptographic tokens) to access the data. It lets users access data securely and easily at any time and in any place, and avoids data breaches due to stolen or lost laptops and USB flash drives. The multi-factor authentication protocol provided in this scheme is also applicable to cloud storage. The second is a Simple Privacy-Preserving Identity-Management for Cloud Environment (SPICE). It is the first digital identity management system that can satisfy “unlinkability” and “delegatable authentication” in addition to other desirable properties in the cloud environment. Unlinkability ensures that none of the cloud service providers (CSPs), even if they collude, can link the transactions of the same user. Delegatable authentication, on the other hand, is unique to the cloud platform, in which several CSPs may join together to provide a packaged service, with one of them being the source provider that interacts with the clients and performs authentication, while the others are receiving CSPs that remain transparent to the clients. The authentication should be delegatable such that a receiving CSP can authenticate a user without direct communication with either the user or the registrar, and without fully trusting the source CSP. The third addresses re-encryption based access control in cloud and distributed storage. We propose the first proxy re-encryption scheme [16] that achieves the non-transferable property. Proxy re-encryption allows a third party (the proxy) to re-encrypt a ciphertext that has been encrypted for one party, without seeing the underlying plaintext, so that it can be decrypted by another. A proxy re-encryption scheme is said to be non-transferable if the proxy and a set of colluding delegatees cannot re-delegate decryption rights to other parties. The scheme can be used by a content owner to delegate content decryption rights to users of untrusted cloud storage. The advantages of using such a scheme are that decryption keys are managed by the content owner and that the plaintext is always hidden from the cloud provider. / published_or_final_version / Computer Science / Doctoral / Doctor of Philosophy
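For readers unfamiliar with proxy re-encryption, the following toy sketch shows the mechanism using the classic ElGamal-based (BBS98-style) construction, which is bidirectional and insecure at these tiny parameter sizes; it only illustrates how a proxy can transform a ciphertext without seeing the plaintext and is not the non-transferable scheme proposed in the thesis.

```python
# Toy, insecure proxy re-encryption sketch (BBS98-style ElGamal), for illustration only.
import random

p = 467                  # small safe prime: p = 2q + 1 (placeholder parameters)
q = 233                  # prime order of the subgroup generated by g
g = 4                    # generator of the order-q subgroup

def keygen():
    sk = random.randrange(1, q)
    return sk, pow(g, sk, p)                       # (secret, public) key pair

def encrypt(pk, m):
    r = random.randrange(1, q)
    return (m * pow(g, r, p)) % p, pow(pk, r, p)   # (m * g^r, g^{a r})

def decrypt(sk, ct):
    c1, c2 = ct
    g_r = pow(c2, pow(sk, -1, q), p)               # (g^{a r})^{1/a} = g^r
    return (c1 * pow(g_r, -1, p)) % p

def rekey(sk_a, sk_b):
    return (sk_b * pow(sk_a, -1, q)) % q           # rk = b / a (mod q)

def reencrypt(rk, ct):
    c1, c2 = ct
    return c1, pow(c2, rk, p)                      # (g^{a r})^{b/a} = g^{b r}

if __name__ == "__main__":
    a, pk_a = keygen()                             # content owner (delegator)
    b, pk_b = keygen()                             # delegatee
    ct_a = encrypt(pk_a, 42)                       # message encrypted for the owner
    ct_b = reencrypt(rekey(a, b), ct_a)            # proxy transforms without seeing 42
    pt = decrypt(b, ct_b)
    assert pt == 42
    print("delegatee recovered the plaintext:", pt)
```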
125

FDDI AS AN EMERGING STANDARD FOR TELEMETRY SYSTEMS

Taylor, Gene 11 1900 (has links)
International Telemetering Conference Proceedings / November 04-07, 1991 / Riviera Hotel and Convention Center, Las Vegas, Nevada / Various high performance fiber optic networks have been in existence and available for over 10 years. Virtually all of them, until recently, were designed around the “better idea” of some single company or engineer, and therefore were or became expensive, proprietary systems with limited support and limited or no growth potential. Many benefits were still realized by users in spite of that, primarily in the areas of increased bandwidth, improved security, and the capability to transmit data over long distances. However, after 5 years of continued development and refinement, the American National Standards Institute (ANSI) X3T9.5 committee has nearly completed acceptance and final approval of the Fiber Distributed Data Interface (FDDI) specifications. The new FDDI standards have already evidenced a tremendous and eager acceptance by the end user community, and are clearly destined to replace Ethernet as the most prevalent network medium. FDDI also offers additional benefits specifically of interest to the telemetry market, and therefore represents an ideal Local Area Network (LAN) technology toward which any TM installation should migrate.
126

An Advanced Commanding and Telemetry System

Hill, Maxwell G. G. 11 1900 (has links)
International Telemetering Conference Proceedings / October 29-November 02, 1990 / Riviera Hotel and Convention Center, Las Vegas, Nevada / The Loral Instrumentation System 500 configured as an Advanced Commanding and Telemetry System (ACTS) supports the acquisition of multiple telemetry downlink streams, and simultaneously supports multiple uplink command streams for today’s satellite vehicles. By using industry and federal standards, the system is able to support, without relying on a host computer, a true distributed dataflow architecture that is complemented by state-of-the-art RISC-based workstations and file servers.
127

Load balancing of irregular parallel applications on heterogeneous computing environments

Janjic, Vladimir January 2012 (has links)
Large-scale heterogeneous distributed computing environments (such as Computational Grids and Clouds) offer the promise of access to a vast amount of computing resources at a relatively low cost. In order to ease application development and deployment on such complex environments, high-level parallel programming languages exist that need to be supported by sophisticated runtime systems. One of the main problems that these runtime systems need to address is dynamic load balancing, ensuring that no resources in the environment are underutilised or overloaded with work. This thesis deals with the problem of obtaining good speedups for irregular applications on heterogeneous distributed computing environments. It focuses on work-stealing techniques that can be used for load balancing during the execution of irregular applications. It specifically addresses two problems that arise during work-stealing: where thieves should look for work during the application execution, and how victims should respond to steal attempts. In particular, we describe and implement a new Feudal Stealing algorithm, as well as new granularity-driven task selection policies, in SCALES, a work-stealing simulator developed for this thesis. In addition, we present a comprehensive evaluation of the Feudal Stealing algorithm and the granularity-driven task selection policies using simulations of a large class of regular and irregular parallel applications on a wide range of computing environments. We show how the Feudal Stealing algorithm and the granularity-driven task selection policies bring significant improvements in the speedups of irregular applications compared to state-of-the-art work-stealing algorithms. Furthermore, in addition to the implementation in SCALES, we present an implementation of the task selection policies in the Grid-GUM runtime system [AZ06] for Glasgow Parallel Haskell (GpH) [THLPJ98], together with its evaluation on a large set of synthetic applications.
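As a point of reference for the work-stealing discussion, below is a minimal random work-stealing sketch: each worker pops tasks from its own deque and, when empty, tries to steal from a randomly chosen victim. It is a generic baseline with made-up tasks, not the Feudal Stealing algorithm or the granularity-driven policies evaluated in the thesis.

```python
# Minimal random work-stealing sketch (baseline illustration only).
import random
import threading
from collections import deque

NUM_WORKERS = 4
deques = [deque() for _ in range(NUM_WORKERS)]
locks = [threading.Lock() for _ in range(NUM_WORKERS)]
done = [0] * NUM_WORKERS

def work(task):
    sum(range(task % 5000))                         # stand-in for an irregular task

def worker(my_id):
    while True:
        task = None
        with locks[my_id]:
            if deques[my_id]:
                task = deques[my_id].pop()          # take from own bottom (LIFO)
        if task is None:
            victim = random.randrange(NUM_WORKERS)  # "where should thieves look?"
            if victim != my_id:
                with locks[victim]:
                    if deques[victim]:
                        task = deques[victim].popleft()   # steal from victim's top
        if task is None:
            if not any(deques):                     # no work anywhere: terminate
                return
            continue
        work(task)
        done[my_id] += 1

if __name__ == "__main__":
    for t in range(10_000):                         # all work starts on worker 0
        deques[0].append(t)
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_WORKERS)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    print("tasks completed per worker:", done)
```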
128

A distributed object model for solving irregularly structured problems on distributed systems

孫昱東, Sun, Yudong. January 2001 (has links)
published_or_final_version / Computer Science and Information Systems / Doctoral / Doctor of Philosophy
129

An architecture to support scalable distributed virtual environment systems on grid

Wang, Tianqi, 王天琦 January 2004 (has links)
published_or_final_version / abstract / Computer Science and Information Systems / Master / Master of Philosophy
130

Process migration for distributed Java computing

Wong, Ying-ying., 王瑩瑩. January 2009 (has links)
published_or_final_version / Computer Science / Master / Master of Philosophy
