181. The design of a passive monitor for distributed programs
Robbins, Arnold David, January 1983
No description available.

182. Distributed and decentralized control in fully distributed processing systems
Saponas, Timothy George, January 1981
No description available.

183. Work distribution in a fully distributed processing system
Sharp, Donald D., 05 1900
No description available.

184. A storage management system for a reliable distributed operating system
Pitts, David Vernon, 08 1900
No description available.

185. A constraint based assignment system for protein 2D nuclear magnetic resonance
Leishman, Scott, January 1995
The interpretation of Nuclear Magnetic Resonance (NMR) spectra to produce a 3D protein structure is a difficult and time-consuming task. The 3D structure is important because it largely determines the properties of the protein; knowledge of the structure can therefore aid in understanding the protein's biological function and perhaps lead to modifications with enhanced therapeutic activity. An NMR experiment produces a large 2D data spectrum. The important part of the spectrum consists of thousands of small cross peaks, and the interpretation task is to associate a pair of hydrogen nuclei with each peak. Manual interpretation takes many months, and there is considerable interest in providing (semi-)automatic tools to speed up this process. The interpretation is difficult because the number of combinations can quickly swamp the human mind, and the spectrum suffers from overlapping peaks and random noise. ASSASSIN (A Semi-automatic Assignment System Specialising In Nmr) is a distributed problem-solving system that has been applied to identifying the peaks associated with the hydrogen nuclei at the ends of long side chains. These results are then passed on to the structural assignment stage, a feedback loop in which interpretation of the spectrum alternates with the generation of preliminary structural models; these models can in turn be used to simplify further analysis of the spectrum. ASSASSIN uses a constraint manager implemented in CHIP to analyse this data more quickly and thoroughly than a human can. The results of this work show that a constraint-based approach is well suited to the NMR domain, where the problems can be easily represented and solved efficiently.
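
The abstract frames peak assignment as a constraint problem. The sketch below is a minimal illustration of that idea, assuming toy chemical-shift data and a plain backtracking search; the hydrogen names, shift values, and tolerance are all hypothetical, and ASSASSIN itself uses a constraint manager written in CHIP rather than Python.

```python
# A minimal sketch of constraint-based peak assignment, loosely in the spirit
# of ASSASSIN. All names and values here are hypothetical; the real system
# uses the CHIP constraint logic programming language.
from itertools import permutations

# Predicted chemical shifts (ppm) for some hydrogen nuclei (illustrative).
shifts = {"HA1": 4.2, "HB1": 1.9, "HA2": 4.5, "HB2": 2.1}

# Observed cross peaks: (shift of nucleus 1, shift of nucleus 2).
peaks = [(4.21, 1.88), (4.49, 2.12)]

TOL = 0.05  # matching tolerance in ppm

def consistent(peak, pair):
    """A peak may be assigned a hydrogen pair only if both coordinates
    match the predicted shifts within tolerance (the core constraint)."""
    (x, y), (h1, h2) = peak, pair
    return abs(x - shifts[h1]) < TOL and abs(y - shifts[h2]) < TOL

def assign(peaks, used=frozenset()):
    """Backtracking search: assign each peak a distinct hydrogen pair."""
    if not peaks:
        return []
    head, rest = peaks[0], peaks[1:]
    for pair in permutations(shifts, 2):
        if pair not in used and consistent(head, pair):
            tail = assign(rest, used | {pair})
            if tail is not None:
                return [(head, pair)] + tail
    return None  # no consistent assignment; backtrack

print(assign(peaks))
# [((4.21, 1.88), ('HA1', 'HB1')), ((4.49, 2.12), ('HA2', 'HB2'))]
```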

186. Modeling, Stability Analysis, and Control of Distributed Generation in the Context of Microgrids
Nasr Azadani, Ehsan, 20 May 2014
One of the consequences of competitive electricity markets and international commitments to green energy is the rapid growth of distributed generation (DG) in distribution grids. These DGs are changing the nature of distribution systems from "passive", containing only loads, to "active", including both loads and DGs. This will affect the dynamic behavior of both transmission and distribution systems. Many technical aspects and challenges of DGs have to be properly understood and addressed; one of them is the need for adequate static and dynamic models of DG units, particularly under unbalanced conditions, to perform proper studies of distribution systems with DGs (e.g., microgrids). The primary objective of this thesis is the development and implementation of dynamic and static models of various DG technologies for stability analysis. These models allow systems with DGs to be studied in both the long and the short term; thus, differential and algebraic equations for various DGs are formulated and discussed so that these models can be integrated into existing power system analysis software tools. The models presented and discussed are generally dynamic models of different DGs for stability studies, considering the dynamics of the primary governor, the generators, and their interfaces and controls. A comprehensive new investigation is also presented of the effects of system unbalance on the stability of distribution grids with DG units based on the synchronous generator (SG) and the doubly-fed induction generator (DFIG) at different loading levels. Detailed steady-state and dynamic analyses of the system are performed. Based on voltage and angle stability studies, it is demonstrated that load unbalance can significantly affect the dynamic performance of a distribution system. Novel, simple, and effective control strategies based on an Unbalanced Voltage Stabilizer (UVS) are also proposed to improve the control and stability of unbalanced distribution systems with SG- and DFIG-based DGs.
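
As a small illustration of the kind of model formulation the abstract describes, the sketch below linearizes a hypothetical two-state swing model of an SG-based DG and applies the standard small-signal test: the operating point is stable iff every eigenvalue of the state matrix has a negative real part. All parameter values are illustrative and not taken from the thesis.

```python
# A minimal small-signal stability check, assuming a hypothetical linearized
# two-state swing model of a synchronous-generator-based DG:
#   d(delta)/dt = w_s * (w - 1)
#   d(w)/dt     = (Pm - Pmax*sin(delta) - D*(w - 1)) / (2*H)
import numpy as np

w_s, H, D = 2 * np.pi * 60, 3.5, 1.0      # rad/s, s, p.u. (illustrative)
Pmax, delta0 = 1.8, np.arcsin(0.8 / 1.8)  # p.u. operating point for Pm = 0.8

# Jacobian (state matrix A) of the swing equations at the operating point.
A = np.array([
    [0.0, w_s],
    [-Pmax * np.cos(delta0) / (2 * H), -D / (2 * H)],
])

eigs = np.linalg.eigvals(A)
print("eigenvalues:", eigs)
# Small-signal stable iff every eigenvalue has a negative real part.
print("stable:", np.all(eigs.real < 0))
```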

187. Mining frequent sequences in one database scan using distributed computers
Brajczuk, Dale A., 01 September 2011
Existing frequent-sequence mining algorithms perform multiple scans of a database, or of a structure that captures the database. In this M.Sc. thesis, I propose a frequent-sequence mining algorithm that mines each database row as it reads it, so that it can potentially complete mining in the time it takes to read the database once. I achieve this by having my algorithm enumerate all sub-sequences of each row as it reads it. Since sub-sequence enumeration is a time-consuming process, I create a method to distribute the work over multiple computers, processors, and threads, while balancing the load across all resources and limiting the amount of communication, so that my algorithm scales well with respect to the number of computers used. Experimental results show that my algorithm is effective and can potentially complete the mining process in close to the time it takes to perform one scan of the input database.
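
A minimal sequential sketch of the one-scan idea follows: each row is read exactly once, and all of its sub-sequences are enumerated and counted on the spot. Restricting enumeration to contiguous sub-sequences is a simplifying assumption, and the thesis's contribution of distributing this work over computers, processors, and threads is omitted here.

```python
# A minimal single-machine sketch of the one-scan idea: enumerate every
# (contiguous) sub-sequence of each row as it is read and count supports.
# Contiguity is a simplifying assumption made for brevity.
from collections import Counter

def mine_one_scan(rows, min_support):
    counts = Counter()
    for row in rows:                      # each row is touched exactly once
        seen = set()                      # count each sub-sequence once per row
        for i in range(len(row)):
            for j in range(i + 1, len(row) + 1):
                seen.add(tuple(row[i:j]))
        counts.update(seen)
    return {s: c for s, c in counts.items() if c >= min_support}

db = [["a", "b", "c"], ["a", "b"], ["b", "c"]]
print(mine_one_scan(db, min_support=2))
# e.g. {('a',): 2, ('b',): 3, ('c',): 2, ('a','b'): 2, ('b','c'): 2}
# (key order may vary)
```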

188. Parallel computational techniques for explicit finite element analysis
Sziveri, Janos, January 1997
No description available.

189. Supporting distributed computation over wide area gigabit networks
Knight, Jon, January 1995
The advent of high-bandwidth fibre optic links that may be used over very large distances has led to much research and development in the field of wide area gigabit networking. One problem that needs to be addressed is how loosely coupled distributed systems may be built over these links, allowing many computers worldwide to take part in complex calculations in order to solve "Grand Challenge" problems. The research conducted as part of this PhD has looked at the practicality of implementing a communication mechanism proposed by Craig Partridge called Late-binding Remote Procedure Calls (LbRPC). LbRPC is intended to export both code and data over the network to remote machines for evaluation, as opposed to traditional RPC mechanisms, which only send parameters to pre-existing remote procedures. The ability to send code as well as data means that LbRPC requests can overcome one of the biggest problems in Wide Area Distributed Computer Systems (WADCS): the fixed latency due to the speed of light. As machines get faster, the fixed multi-millisecond round-trip delay equates to ever-increasing numbers of CPU cycles. For a WADCS to be efficient, programs should minimise the number of network transits they incur; by allowing the application programmer to export arbitrary code to the remote machine, this may be achieved. This research has looked at the feasibility of supporting secure exportation of arbitrary code and data in heterogeneous, loosely coupled, distributed computing environments. It has investigated techniques for making placement decisions for the code in cases where there are a large number of widely dispersed remote servers that could be used. The latter has resulted in the development of a novel LbRPC prototype using multicast IP for implicit placement and a sequenced, multi-packet saturation multicast transport protocol. These prototypes show that it is possible to export code and data to multiple remote hosts, thereby removing the need to make complex and error-prone explicit process placement decisions.
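
The sketch below illustrates the late-binding idea in miniature: the client ships both the procedure and its arguments in one request, and the server binds and evaluates the code on arrival. The JSON wire format and the eval_remote function are hypothetical; the actual LbRPC proposal and the thesis prototypes are not Python-based, and a real system would need the sandboxing implied by the thesis's concern with secure code exportation.

```python
# A minimal sketch of the late-binding RPC idea: ship code *and* data to the
# remote side in one request, so a multi-step computation costs one network
# round trip instead of several. The wire format and function names are
# hypothetical, not taken from Partridge's proposal or the thesis.
import json

def eval_remote(request_bytes):
    """Stand-in for the remote server: deserialise, bind late, evaluate.
    NOTE: exec of untrusted code is unsafe; a real system needs sandboxing."""
    request = json.loads(request_bytes)
    env = {}
    exec(request["code"], env)           # the procedure did not pre-exist here
    return env[request["entry"]](*request["args"])

# Client side: export both the procedure and its arguments.
request = json.dumps({
    "code": "def mean(xs):\n    return sum(xs) / len(xs)\n",
    "entry": "mean",
    "args": [[1.0, 2.0, 3.0, 4.0]],
})
print(eval_remote(request.encode()))     # 2.5 -- one round trip, arbitrary code
```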

190. Performance modelling of replication protocols
Misra, Manoj, January 1997
This thesis is concerned with the performance modelling of data replication protocols. Data replication is used to provide fault tolerance and to improve the performance of a distributed system. Replication not only needs extra storage but also incurs an extra cost when performing an update. It is not always clear which algorithm will give the best performance in a given scenario, how many copies should be maintained, or where these copies should be located to yield the best performance. The consistency requirements also change with the application. One has to choose these parameters to maximize reliability and speed and minimize cost, and a study showing the effect of changes in different parameters on the performance of these protocols would be helpful in making these decisions. With the use of data replication techniques in wide-area systems, where hundreds or even thousands of sites may be involved, it has become important to evaluate the performance of the schemes maintaining copies of data. This thesis evaluates the performance of replication protocols that provide different levels of data consistency, ranging from strong to weak. Protocols that try to integrate strong and weak consistency are also examined. Queueing theory techniques are used to evaluate the performance of these protocols. The performance measures of interest are the response times of read and write jobs; these times are evaluated both when replicas are reliable and when they are subject to random breakdowns and repairs.
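
As a back-of-the-envelope illustration of the queueing approach, the sketch below models each replica as an M/M/1 queue under a hypothetical read-one/write-all protocol and shows how per-replica load, and hence mean response time, changes with the number of replicas. All rates are invented, and the thesis's models are far more detailed (covering, for example, replica breakdowns and repairs).

```python
# Model each replica as an M/M/1 queue under a read-one/write-all protocol:
# reads are shared across the n replicas, writes hit every replica. All
# rates below are illustrative, not taken from the thesis.
def mm1_response(arrival, service):
    """Mean response time of an M/M/1 queue: W = 1 / (mu - lambda)."""
    assert arrival < service, "queue must be stable"
    return 1.0 / (service - arrival)

read_rate, write_rate, service = 40.0, 10.0, 100.0  # jobs/s (illustrative)

for n in (1, 2, 4, 8):
    # Reads go to one replica (load shared); writes hit every replica.
    load_per_replica = read_rate / n + write_rate
    w = mm1_response(load_per_replica, service)
    print(f"n={n}: per-replica load={load_per_replica:5.1f}/s, "
          f"mean response={1000 * w:5.2f} ms")
```

Adding replicas dilutes read load but not write load, so the read/write mix decides whether replication helps or hurts response time; this is exactly the kind of trade-off such models make explicit.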
