151

CAULDRONS: An Abstraction for Concurrent Problem Solving

Haase, Ken 01 September 1986 (has links)
This research extends a tradition of distributed theories of mind into the implementation of a distributed problem solver. In this problem solver, a number of ideas from Minsky's Society of Mind are implemented and are found to provide powerful abstractions for the programming of distributed systems. These abstractions are the cauldron, a mechanism for instantiating reasoning contexts; the frame, a way of modularly describing those contexts; and the goal-node, a mechanism for bringing a particular context to bear on a specific task. The implementation of these abstractions and of the distributed problem solver in which they run is described, accompanied by examples of their application to various domains.
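
A minimal Python sketch of how these three abstractions might fit together is given below; it is an illustration only, and every class and method name here is hypothetical rather than taken from the thesis.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """Modular description of a reasoning context (hypothetical structure)."""
    name: str
    slots: dict = field(default_factory=dict)

@dataclass
class Cauldron:
    """An instantiated reasoning context built from a frame."""
    frame: Frame
    bindings: dict = field(default_factory=dict)

    def instantiate(self, **overrides):
        # Merge the frame's default slots with task-specific bindings.
        self.bindings = {**self.frame.slots, **overrides}
        return self

@dataclass
class GoalNode:
    """Brings a particular context (cauldron) to bear on a specific task."""
    task: str
    cauldron: Cauldron

    def pursue(self):
        ctx = self.cauldron.bindings
        return f"solving {self.task!r} in context {self.cauldron.frame.name} with {ctx}"

# Example: a 'route-planning' frame spawns a cauldron for one concrete goal.
frame = Frame("route-planning", {"map": "campus", "metric": "time"})
goal = GoalNode("find path A->B", Cauldron(frame).instantiate(start="A", end="B"))
print(goal.pursue())
```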
152

Persistent Nodes for Reliable Memory in Geographically Local Networks

Beal, Jacob 15 April 2003 (has links)
A Persistent Node is a redundant distributed mechanism for storing a key/value pair reliably in a geographically local network. In this paper, I develop a method of establishing Persistent Nodes in an amorphous matrix. I address issues of construction, usage, atomicity guarantees and reliability in the face of stopping failures. Applications include routing, congestion control, and data storage in gigascale networks.
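
A hedged sketch of the core idea, a key/value pair replicated across several nearby nodes so that it survives the stopping failure of any one of them, is shown below. The replication factor, random replica placement, and majority read are simplifying assumptions, not the amorphous-matrix construction developed in the paper.

```python
import random
from collections import Counter

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.alive = True
        self.store = {}

def write_persistent(nodes, key, value, replication=3):
    """Replicate a key/value pair onto several (randomly chosen) live nodes."""
    replicas = random.sample([n for n in nodes if n.alive], replication)
    for n in replicas:
        n.store[key] = value
    return replicas

def read_persistent(nodes, key):
    """Return the majority value among live replicas, or None if all have failed."""
    values = [n.store[key] for n in nodes if n.alive and key in n.store]
    return Counter(values).most_common(1)[0][0] if values else None

nodes = [Node(i) for i in range(10)]
replicas = write_persistent(nodes, "route:42", "via-gateway-7")
replicas[0].alive = False                     # one replica suffers a stopping failure
print(read_persistent(nodes, "route:42"))     # the pair is still readable
```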
153

Transportation system modeling using the High Level Architecture

Melouk, Sharif 30 September 2004 (has links)
This dissertation investigates the High Level Architecture (HLA) as a possible distributed simulation framework for transportation systems. The HLA is an object-oriented approach to distributed simulations developed by the Department of Defense (DoD) to handle the issues of reuse and interoperability of simulations. The research objectives are as follows: (1) determine the feasibility of making existing traffic management simulation environments HLA compliant; (2) evaluate the usability of existing HLA support software in the transportation arena; (3) determine the usability of methods developed by the military to test for HLA compliance on traffic simulation models; and (4) examine the possibility of using the HLA to create Internet-based virtual environments for transportation research. These objectives were achieved in part via the development of a distributed simulation environment using the HLA. Two independent traffic simulation models (federates) comprised the environment (federation). A CORSIM federate models a freeway feeder road with an on-ramp while an Arena federate models a tollbooth exchange.
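
To make the federation structure concrete, the following sketch (which is not the actual HLA/RTI API; the class and method names are invented for illustration) shows two hypothetical federates, a freeway model and a tollbooth model, exchanging time-stamped attribute updates through a minimal runtime standing in for the federation bus.

```python
from collections import defaultdict

class MiniRTI:
    """Toy stand-in for an HLA runtime: routes attribute updates to subscribers."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, attribute, federate):
        self.subscribers[attribute].append(federate)

    def publish(self, sender, attribute, value, sim_time):
        for fed in self.subscribers[attribute]:
            if fed is not sender:
                fed.reflect(attribute, value, sim_time)

class Federate:
    def __init__(self, name, rti):
        self.name, self.rti = name, rti
        self.received = []

    def reflect(self, attribute, value, sim_time):
        self.received.append((sim_time, attribute, value))

rti = MiniRTI()
freeway = Federate("CORSIM-like freeway", rti)
tollbooth = Federate("Arena-like tollbooth", rti)
rti.subscribe("vehicle_arrival", tollbooth)

# The freeway federate hands vehicle arrivals to the tollbooth federate each step.
for t in range(3):
    rti.publish(freeway, "vehicle_arrival", {"lane": 1, "count": 5 + t}, sim_time=t)
print(tollbooth.received)
```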
154

Making reliable distributed systems in the presence of software errors

Armstrong, Joe January 2003 (has links)
The work described in this thesis is the result of a research program started in 1981 to find better ways of programming Telecom applications. These applications are large programs which, despite careful testing, will probably contain many errors when the program is put into service. We assume that such programs do contain errors, and investigate methods for building reliable systems despite such errors. The research has resulted in the development of a new programming language (called Erlang), together with a design methodology and set of libraries for building robust systems (called OTP). At the time of writing, the technology described here is used in a number of major Ericsson and Nortel products. A number of small companies have also been formed which exploit the technology. The central problem addressed by this thesis is the problem of constructing reliable systems from programs which may themselves contain errors. Constructing such systems imposes a number of requirements on any programming language that is to be used for the construction. I discuss these language requirements, and show how they are satisfied by Erlang. Problems can be solved in a programming language, or in the standard libraries which accompany the language. I argue how certain of the requirements necessary to build a fault-tolerant system are solved in the language, and others are solved in the standard libraries. Together these form a basis for building fault-tolerant software systems. No theory is complete without proof that the ideas work in practice. To demonstrate that these ideas work in practice, I present a number of case studies of large commercially successful products which use this technology. At the time of writing, the largest of these projects is a major Ericsson product, having over a million lines of Erlang code. This product (the AXD301) is thought to be one of the most reliable products ever made by Ericsson. Finally, I ask if the goal of finding better ways to program Telecom applications was fulfilled; I also point to areas where I think the system could be improved.
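
The design philosophy summarized above, assume the code contains errors and build reliability around it through supervised restart, can be illustrated with a short sketch. This is Python rather than Erlang/OTP, and the flaky worker and restart budget are purely illustrative.

```python
import random

def flaky_worker(job):
    """A worker that, like large Telecom programs, sometimes fails despite careful testing."""
    if random.random() < 0.3:
        raise RuntimeError(f"worker crashed on {job!r}")
    return f"handled {job!r}"

def supervise(job, max_restarts=5):
    """Restart the failed worker instead of trying to handle every error inside it."""
    for attempt in range(1, max_restarts + 1):
        try:
            return flaky_worker(job)
        except RuntimeError as err:
            print(f"attempt {attempt}: {err}; restarting worker")
    raise RuntimeError(f"giving up on {job!r} after {max_restarts} restarts")

print(supervise("call-setup-1234"))
```

In Erlang/OTP the supervisor and worker are separate processes with no shared state, so a crash in one cannot corrupt the other; the in-process sketch above only conveys the restart strategy.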
155

Fully Distributed Register Files for Heterogeneous Clustered Microarchitectures

Bunchua, Santithorn 09 July 2004 (has links)
Conventional processor design utilizes a central register file and a bypass network to deliver operands to and from functional units, which cannot scale to a large number of functional units. As more functional units are integrated into a processor, the number of ports on a register file grows linearly while area, delay, and energy consumption grow even more rapidly. Physical properties of a bypass network scale in a similar manner. In this dissertation, a fully distributed register file organization is presented to overcome this limitation by relying on small register files with fewer ports and localized operand bypasses. Unlike other clustered microarchitectures, each cluster features a small single-issue functional unit coupled with a small local register file. Several clusters are used, and each of them can be different. All register files are connected through a register transfer network that supports multicast communications. Techniques to support distributed register file operations are presented for both dynamically and statically scheduled processors. These include the eager and multicast register transfer mechanisms in the dynamic approach and the global data routing with multicasting algorithm in the static approach. Although this organization requires additional cycles to execute a program, the overhead is compensated by significant savings obtained through smaller area, faster operand access time, and lower energy consumption. With faster operating frequency and more efficient hardware implementation, overall performance can be improved. Additionally, the fully distributed register file organization is applied to an ILP-SIMD processing element, which is the major building block of a massively parallel media processor array. The results show a reduction in die area, which can be utilized to implement additional processing elements. Consequently, performance is improved through a higher degree of data parallelism enabled by a larger processor array. In summary, the fully distributed register file architecture permits future processors to scale to a large number of functional units. This is especially desirable in high-throughput processors such as wide-issue processors and multithreaded processors. Moreover, localized communication is highly desirable in the transition to future deep submicron technologies, since long wires are a critical issue in processes with extremely small feature sizes.
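
As a rough software model of the organization described above (an assumption-laden illustration, not the dissertation's hardware design), the sketch below gives each cluster a small local register file and routes produced values through a register transfer network that multicasts to exactly the consuming clusters.

```python
class Cluster:
    """One single-issue functional unit with its own small local register file."""
    def __init__(self, cluster_id, num_regs=8):
        self.cluster_id = cluster_id
        self.regs = [0] * num_regs

class RegisterTransferNetwork:
    """Moves a value from a producer cluster to a set of consumer clusters at once."""
    def __init__(self, clusters):
        self.clusters = clusters
        self.transfers = 0

    def multicast(self, value, destinations):
        # destinations: (cluster_id, local_register_index) pairs that consume the value
        for cid, reg in destinations:
            self.clusters[cid].regs[reg] = value
        self.transfers += 1          # one network transaction reaches all recipients

clusters = [Cluster(i) for i in range(4)]
rtn = RegisterTransferNetwork(clusters)

clusters[0].regs[2] = 42                                # cluster 0 produces a result locally
rtn.multicast(clusters[0].regs[2], [(1, 0), (3, 5)])    # consumers share one multicast copy
print(clusters[1].regs[0], clusters[3].regs[5], rtn.transfers)   # 42 42 1
```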
156

Construction of Distributed Method for Analyzing a Large Number of Sequence Data: Using Influenza A Virus Protein Sequences as Examples

Tu, Guo-Hua 01 November 2010 (has links)
Analyzing the eight genomic protein segments of influenza A virus could provide a better understanding of this specific virus. Along with the progress of computer technology, numerous influenza A virus protein sequences have become available in various internet data banks. However, analyzing a large number of protein sequences is cumbersome work, so it is necessary to develop new tools based on algorithmic methods. This study used a distributed method to develop protein sequence clustering analysis software in the Java programming language. The software splits a large number of protein sequences downloaded from NCBI into several files. Because these individual files are processed at the same time, the time required for comparison and analysis is reduced. Finally, we used the PRIMER 5 program to analyze these individual files and produce MDS and UPGMA similarity analysis diagrams. The similarity analysis diagrams indicated high homology in the genomic protein segments of influenza A virus from 1997 to 2006. The analysis also showed that the genomic protein segments of influenza A virus are similar across Asian countries. However, the similarity between other Asian countries and China is not significant. From analyzing the hosts, the genomic protein segments of influenza A virus are highly similar across species such as birds, chickens, ducks, and pigs. Therefore, our data strongly support the possibility that influenza A viruses can cross species to infect humans.
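
The core of the approach, splitting a large sequence collection into chunks and analyzing the chunks concurrently, can be sketched as follows. This is Python rather than the Java used in the study, the input file name is hypothetical, and the pairwise-identity scoring is a placeholder for the real comparison step.

```python
from multiprocessing import Pool

def read_fasta(path):
    """Parse a FASTA file into {header: sequence} (minimal parser)."""
    seqs, header = {}, None
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                header, seqs[header] = line[1:], ""
            elif header:
                seqs[header] += line
    return seqs

def split_chunks(items, n_chunks):
    """Split the sequence list into roughly equal chunks, one per worker."""
    return [items[i::n_chunks] for i in range(n_chunks)]

def pairwise_identity(chunk):
    """Crude identity score for every pair in one chunk (placeholder analysis)."""
    results = []
    for i, (h1, s1) in enumerate(chunk):
        for h2, s2 in chunk[i + 1:]:
            matches = sum(a == b for a, b in zip(s1, s2))
            results.append((h1, h2, matches / max(len(s1), len(s2))))
    return results

if __name__ == "__main__":
    sequences = list(read_fasta("influenza_A_segments.fasta").items())  # hypothetical input file
    with Pool(processes=4) as pool:
        per_chunk = pool.map(pairwise_identity, split_chunks(sequences, 4))
    print(sum(len(r) for r in per_chunk), "pairwise comparisons computed")
```

Comparisons across different chunks are omitted here for brevity; in the study, the per-file results were subsequently analyzed with PRIMER 5 to produce the MDS and UPGMA diagrams.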
157

An Investigation of the Utilization of Smart Meter Data to Adapt Overcurrent Protection for Radial Distribution Systems with a High Penetration of Distributed Generation

Douglin, Richard Henry May 2012 (has links)
The future of electric power distribution systems (DSs) is one that incorporates extensive amounts of advanced metering, distribution automation, and distributed generation technologies. Most DSs were designed to be radial systems, and the major philosophies of their protection, namely selectivity and sensitivity, were easily achieved. Settings for overcurrent protective devices (OCPDs) were static and based on the maximum load downstream of their locations, with little concern for major configuration changes. However, the integration of distributed generators (DGs) in radial distribution systems (RDSs) causes bidirectional power flows and varying short-circuit currents to be sensed by protective devices, thereby affecting these established protection principles. Several researchers have investigated methods to preserve the selectivity of overcurrent protection coordination in RDSs with DGs, but at the expense of protective device sensitivity due to an inherent change in system configuration. This thesis presents an investigation to adapt the pickup settings of the substation relay, based on configuration changes in a DS with DGs, using smart meter data from the prior year. An existing protection scheme causes the faulted areas of DSs with DGs to revert to a radial configuration, thereby allowing conventional OCPDs to isolate faults. Based on the location of the fault, the created radial segments are known and vary in length. The proposed methodology involves using demand information available via smart metering to determine the seasonal maximum diversified demands in each of the radial segments that are formed. These seasonal maximum diversified demands are used to yield several pickup settings for the substation overcurrent relay of the DS. The existing protection approach enables the selectivity of radial overcurrent protection coordination to be maintained; the sensitivity of the substation relay is improved by adapting its pickup settings based on seasonal demand and system configuration changes. The results of the studies are obtained through simulation in EMTP™/PSCAD® using a multi-feeder test system that includes DGs and smart meters located at the secondary distribution load level. The results show that using seasonal settings for the substation relay based on configuration changes in a DS with DGs can improve the sensitivity of the substation relay.
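
As a hedged illustration of the adaptive-setting idea (not the thesis's EMTP/PSCAD study), the sketch below derives one pickup setting per radial segment and season from a year of hypothetical smart-meter demand records, taking a fixed margin above the seasonal maximum diversified demand.

```python
from collections import defaultdict

# Hypothetical smart-meter records: (segment_id, season, demand_kW) sampled over a year.
readings = [
    ("seg-1", "summer", 412.0), ("seg-1", "summer", 455.5), ("seg-1", "winter", 298.0),
    ("seg-2", "summer", 120.0), ("seg-2", "winter", 180.5), ("seg-2", "winter", 171.0),
]

def seasonal_max_demand(meter_readings):
    """Maximum diversified demand seen in each (segment, season) over the prior year."""
    peaks = defaultdict(float)
    for segment, season, kw in meter_readings:
        peaks[(segment, season)] = max(peaks[(segment, season)], kw)
    return peaks

def pickup_settings(peaks, voltage_kv=12.47, margin=1.25):
    """Relay pickup current per segment and season: a margin above the seasonal peak load."""
    settings = {}
    for (segment, season), kw in peaks.items():
        load_amps = kw / (3 ** 0.5 * voltage_kv)   # three-phase current, unity power factor assumed
        settings[(segment, season)] = round(margin * load_amps, 1)
    return settings

print(pickup_settings(seasonal_max_demand(readings)))
```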
158

Cost-Benefit Assessments of Distributed Power Generation

Yu, Sen-Yen 10 July 2003 (has links)
The most common application of Distributed Generation (DG) is for reliability reasons. After experiencing an interruption, backup generators can be started to supply electricity to critical loads. The next most common application for DG is peak load shaving. During time periods of high energy demand or high energy prices, on-site generators are started up and used to serve part of the on-site loads. DG can therefore increase the reliability of the power supply, reduce losses from interruptions, and relieve peak loads. Due to the high costs, however, only a few units have been installed. In order to investigate their economic value, several economic assessment methods are used in this thesis to evaluate the cost-benefit of DG. Test results have revealed that, unless it is for environmental protection reasons, the investment in DG is of little value if the fuel cost is high and the electricity cost and customer interruption costs are low. Keywords: Distributed Generation, peak load shaving.
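
A toy sketch of one such economic assessment, a net-present-value comparison of a DG unit's capital, fuel, and maintenance costs against its interruption and peak-shaving savings, is shown below; all figures are hypothetical and not drawn from the thesis.

```python
def dg_net_present_value(capital_cost, annual_fuel_cost, annual_om_cost,
                         annual_interruption_savings, annual_peak_shaving_savings,
                         years=15, discount_rate=0.08):
    """NPV of installing a DG unit: discounted annual benefits minus costs, less capital."""
    annual_net = (annual_interruption_savings + annual_peak_shaving_savings
                  - annual_fuel_cost - annual_om_cost)
    npv = -capital_cost
    for year in range(1, years + 1):
        npv += annual_net / (1 + discount_rate) ** year
    return npv

# High fuel cost combined with low interruption savings makes the investment
# unattractive, in line with the thesis's qualitative finding (figures are illustrative).
print(dg_net_present_value(capital_cost=500_000, annual_fuel_cost=90_000,
                           annual_om_cost=15_000, annual_interruption_savings=40_000,
                           annual_peak_shaving_savings=30_000))
```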
159

Distributed Detection in UWB-IR Sensor Networks with Randomization of the Number of Pulses

Chang, Yung-Lin 04 August 2008 (has links)
In this thesis, we consider a distributed detection problem in wireless sensor networks (WSNs) using ultrawide bandwidth (UWB) communications. Due to the severe restrictions on power consumption, energy efficiency becomes a critical design issue in WSNs. UWB technology offers low-power transceivers and low-complexity, low-cost circuitry that are well suited to the physical-layer requirements of WSNs. In a typical parallel fusion network, local decisions are made by local sensors and transmitted through a wireless channel to a fusion center, where the final decision is made. In this thesis, we control the number of UWB pulses to achieve energy-efficient distributed detection. We first theoretically characterize the performance of distributed detection using UWB communications. Both AWGN and fading channels are considered. Based on the analysis, we then obtain the minimum number of pulses per detection required to meet the target performance. To achieve a near-optimal design, we further propose a multiple access technique based on a random number of UWB pulses. Finally, a performance evaluation is provided to demonstrate the advantage of our design.
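
The parallel fusion structure described above can be illustrated with a short Monte Carlo sketch: each sensor makes a noisy binary decision and a fusion center applies a k-out-of-n counting rule. The noise model, threshold, and rule are generic assumptions, not the thesis's UWB pulse analysis.

```python
import random

def local_decision(signal_present, snr=1.5, threshold=0.75):
    """One sensor's binary decision from a noisy observation (energy-detector style)."""
    observation = (snr if signal_present else 0.0) + random.gauss(0.0, 1.0)
    return 1 if observation > threshold else 0

def fusion_center(decisions, k):
    """Counting rule: declare the signal present if at least k local decisions say so."""
    return 1 if sum(decisions) >= k else 0

def detection_probability(num_sensors=10, k=4, trials=20_000):
    """Monte Carlo estimate of the global detection probability under the counting rule."""
    hits = sum(fusion_center([local_decision(True) for _ in range(num_sensors)], k)
               for _ in range(trials))
    return hits / trials

print(f"P_detection ≈ {detection_probability():.3f}")
```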
