71

Scheduling distributed data-intensive applications on global grids /

Venugopal, Srikumar. January 2006 (has links)
Thesis (Ph.D.)--University of Melbourne, Dept. of Computer Science and Software Engineering, 2006. / Typescript. Includes bibliographical references (leaves 189-207).
72

Toward the development of control software for an operator interface in the distributed automation environment /

Jayaraman, Usha. January 1992 (has links)
Report (M.S.)--Virginia Polytechnic Institute and State University. M.S. 1992. / Abstract. Includes bibliographical references (leaves 94-97). Also available via the Internet.
73

URA: a universal data replication architecture /

Zheng, Jiandan, January 1900 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 2008. / Vita. Includes bibliographical references.
74

Query processing in distributed database systems /

Unnava, Vasundhara. January 1992 (has links)
Thesis (Ph. D.)--Ohio State University, 1992. / Includes bibliographical references (leaves 116-119). Available online via OhioLINK's ETD Center.
75

Joint intentions as a model of multi-agent cooperation in complex dynamic environments

Jennings, Nick R. January 1992 (has links)
Computer-based systems are being used to tackle increasingly complex problems in ever more demanding domains. The size and amount of knowledge needed by such systems means they are becoming unwieldy and difficult to engineer into reliable, consistent products. One paradigm for overcoming this barrier is to decompose the problem into smaller more manageable components which can communicate and cooperate at the level of sharing processing responsibilities and information. Until recently, research in multi-agent systems has been based on ad hoc models of action and interaction; however, the notion of intentions is beginning to emerge as a prime candidate upon which a sound theory could be based. This research develops a new model of joint intentions as a means of describing the activities of groups of agents working collaboratively. The model stresses the role of intentions in controlling agents' current and future actions; defining preconditions which must be satisfied before joint problem solving can commence and prescribing how individual agents should behave once it has been established. Such a model becomes especially important in dynamic environments in which agents may possess neither complete nor correct beliefs about their world or other agents, have changeable goals and fallible actions and be subject to interruption from external events. The theory has been implemented in a general purpose cooperation framework, called GRATE*, and applied to the real-world problem of electricity transportation management. In this application, individual problem solvers have to take decisions using partial, imprecise information and respond to an ever changing external world. This fertile environment enabled the quantitative benefits of the theory to be assessed and comparisons with other models of collaborative problem solving to be undertaken. These experiments highlighted the high degree of coherence attained by GRATE* problem solving groups, even in the most dynamic and unpredictable application contexts.
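A minimal sketch of the idea (assumed names and rules, not the GRATE* implementation from the thesis): a joint intention records its preconditions, gates the start of joint problem solving on them, and is dropped once its goal is achieved or becomes impossible, at which point the rest of the team would be informed.

```python
# Illustrative sketch only (assumed names and rules, not the GRATE* framework):
# a joint intention that gates the start of joint problem solving on its
# preconditions and is dropped once the goal is achieved or becomes impossible.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class JointIntention:
    goal: str
    team: List[str]
    preconditions: List[Callable[[], bool]] = field(default_factory=list)
    achieved: bool = False
    impossible: bool = False

    def can_commence(self) -> bool:
        # Joint activity may only start once every precondition holds.
        return all(check() for check in self.preconditions)

    def must_be_dropped(self) -> bool:
        # A common commitment convention: abandon the joint intention (and tell
        # the rest of the team) when the goal is achieved or can no longer be reached.
        return self.achieved or self.impossible

# Example: two agents agree to restore power flow once both believe the fault
# has been located.
fault_located = {"agent1": True, "agent2": False}
ji = JointIntention(
    goal="restore power flow",
    team=["agent1", "agent2"],
    preconditions=[lambda: all(fault_located.values())],
)
print(ji.can_commence())        # False: agent2 is not yet convinced
fault_located["agent2"] = True
print(ji.can_commence())        # True: joint problem solving may begin
```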
76

An Automated Multi-agent Framework For Testing Distributed System

Haque, Ehsanul 01 May 2013 (has links)
Testing is the part of the software development life cycle (SDLC) that ensures the quality and efficiency of the software. It gives developers confidence in the system by detecting faults early, and it is therefore considered one of the most important parts of the SDLC. Unfortunately, testing is often neglected by developers, mainly because of the time and cost of the testing process. Testing involves a great deal of manpower, especially for a large system such as a distributed system. On the other hand, bugs are more common in a large system than in a small centralized one, so there is no alternative to testing for finding and fixing them. The situation gets worse if the developers follow one of the most powerful development practices, continuous integration, because test cases must be written in each cycle of the continuous integration process, which increases development time drastically. As a result, testing is often neglected for large systems. This is an alarming situation, because distributed systems are among the most popular and widely adopted systems in both industry and academia, and a large number of developers are engaged in delivering distributed software solutions. If these systems are delivered to users untested, we are likely to end up with many buggy systems every year. There are also very few testing frameworks on the market for distributed systems compared with the number available for traditional systems. The main reason is that testing a distributed system is a far more difficult and complex process than testing a centralized one. The most common technique for testing a centralized system is to test the middleware, which may not be applicable to a distributed system. Unlike a traditional system, a distributed system can reside in multiple locations in different corners of the world, which makes testing and verification difficult. In addition, distributed systems have basic properties such as fault tolerance, availability, concurrency, responsiveness, and security that make the testing process more complex and difficult. This research proposes a multi-agent-based testing framework for distributed systems in which multiple agents communicate with each other and carry out the whole testing process. The well-proven approach to testing centralized systems has been partially reused in the design of the framework so that developers will be more comfortable using it. The research also focuses on automating the testing process, which reduces the time and cost of testing and relieves developers from regenerating the same test cases before each release of the application. This thesis briefly describes the architecture of the framework and the communication process between the agents.
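One way to picture the approach (a hedged sketch under assumed names, not the framework described in the thesis): a coordinator agent fans a test suite out to per-node test agents and aggregates their verdicts, so the same suite can be re-run automatically on every continuous-integration cycle.

```python
# Minimal sketch (assumed design, not the thesis's framework): a coordinator
# agent dispatches test cases to per-node test agents and collects their
# verdicts so the suite can be re-run automatically on each CI cycle.
from concurrent.futures import ThreadPoolExecutor

class NodeTestAgent:
    """Stands in for an agent deployed next to one node of the system under test."""
    def __init__(self, node_name):
        self.node_name = node_name

    def run(self, test_case):
        # A real agent would exercise its node here; we only simulate a verdict.
        passed = test_case(self.node_name)
        return {"node": self.node_name, "test": test_case.__name__, "passed": passed}

class CoordinatorAgent:
    """Dispatches the suite to all node agents and aggregates the results."""
    def __init__(self, agents):
        self.agents = agents

    def run_suite(self, test_cases):
        with ThreadPoolExecutor() as pool:
            futures = [pool.submit(agent.run, tc)
                       for agent in self.agents for tc in test_cases]
            return [f.result() for f in futures]

# Example: two hypothetical checks run against three nodes.
def responds_to_ping(node): return True
def replica_is_consistent(node): return node != "node-3"

agents = [NodeTestAgent(f"node-{i}") for i in range(1, 4)]
report = CoordinatorAgent(agents).run_suite([responds_to_ping, replica_is_consistent])
print([r for r in report if not r["passed"]])  # -> the failing (node, test) pairs
```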
77

SIZE OPTIMIZATION OF PHOTOVOLTAIC ARRAYS AND ENERGY STORAGE IN A DISTRIBUTION FEEDER

Smith, Steven 01 May 2018 (has links)
As utilities become more interested in using renewable energy to power the grid, the problem becomes how to size and locate the generation facilities. This thesis approaches the idea of using distributed medium-scale generation facilities at the distribution feeder level. We propose an algorithm to determine the optimum size of a photovoltaic (PV) array and an energy storage system for a distribution feeder. The cost of operating a feeder is quantified by considering the net load at the substation, voltage changes, load following, and the initial cost of implementing a photovoltaic system and a battery energy storage system. The PV inverter is used to improve the voltage on the circuit and is sized proportionally to the array size. The energy storage system operates in peak-shaving and load-following capacities in order to reduce stress on current generation facilities. The algorithm then minimizes the total cost of operating the feeder for a year by sizing these distributed generation resources using particle swarm optimization. Optimization of a real-world system yielded results in which the power at the substation (including all losses) is reduced by 5.39% over the course of a year and the average voltage drop on the circuit is improved by 50.17% using the proposed photovoltaic inverter control scheme.
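For readers unfamiliar with the optimizer, the sketch below shows the bare particle swarm loop over the two decision variables (PV array size and battery capacity); the cost function is a hypothetical stand-in, since the thesis's actual cost model combines net substation load, voltage changes, load following, and capital cost.

```python
# Sketch of the optimization idea only: particle swarm optimization over two
# decision variables (PV array kW, battery kWh). annual_cost() is a stand-in;
# the thesis's feeder cost model would replace it.
import random

def annual_cost(pv_kw, batt_kwh):
    # Hypothetical stand-in with an interior optimum near (1500 kW, 3000 kWh),
    # used only to exercise the optimizer.
    return 0.002 * (pv_kw - 1500) ** 2 + 0.001 * (batt_kwh - 3000) ** 2 + 50_000

def pso(n_particles=30, iters=200, bounds=((0, 3000), (0, 6000)), w=0.7, c1=1.5, c2=1.5):
    dim = len(bounds)
    pos = [[random.uniform(*bounds[d]) for d in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # each particle's best position
    pbest_cost = [annual_cost(*p) for p in pos]
    gbest = min(zip(pbest_cost, pbest))[1][:]        # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Keep the particle inside the sizing bounds.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            cost = annual_cost(*pos[i])
            if cost < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], cost
                if cost < annual_cost(*gbest):
                    gbest = pos[i][:]
    return gbest

print(pso())  # -> [pv_kw, batt_kwh] with the lowest stand-in annual cost
```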
78

Coding, Computing, and Communication in Distributed Storage Systems

Gerami, Majid January 2016 (has links)
Conventional studies in communication networks mostly focus on securely and reliably transmitting data from a source node (or multiple source nodes) to multiple destinations. A more general problem appears when the destination nodes are interested in obtaining functions of the data available in distributed source nodes. For obtaining a function, transmitting all the data to a destination node and then computing the function might be inefficient. In order to exploit the network resources efficiently, the general problem combines distributed computing with coding and communication. This problem has applications in distributed systems, e.g., in wireless sensor networks, in distributed storage systems, and in distributed computing systems. Following this general problem formulation, we study the optimal and secure recovery of lost data in storage nodes and the reconstruction of a version of a file in distributed storage systems. The significance of this study stems from the fact that new trends in communications, including big data, the Internet of things, low latency, and high-reliability communications, challenge existing centralized data storage systems. Distributed storage systems can rectify those issues by distributing thousands of storage nodes (possibly around the globe) and benefiting users by bringing data into their proximity. Yet distributing the storage nodes brings new challenges. In these distributed systems, where storage nodes are connected through links and servers, communication plays a main role in performance. In addition, a part of the network may fail, or, due to communication failures or delays, multiple versions of a file may exist. Moreover, an intruder can overhear the communications between storage nodes and obtain some information about the stored data. Therefore, there are challenges concerning reliability, security, availability, and consistency. To increase reliability, systems need to store redundant data in storage nodes and employ error control codes. To maintain reliability in a dynamic environment where storage nodes can fail, the system should have an autonomous repair process; namely, it should regenerate failed nodes with the help of other storage nodes. The repair process demands bandwidth, energy, or, in general, transmission costs. We propose novel techniques to reduce the repair cost in distributed storage systems. First, we propose surviving-node cooperation in repair, meaning that surviving nodes can combine their received data with their own stored data and then transmit toward the new node. In addition, we study the repair problem in multi-hop networks and consider the cost of transmitting data between storage nodes. While the classical repair model assumes the availability of direct links between the new node and surviving nodes, we consider that such links may not be available, either due to failure or due to their costs. We formulate an optimization problem to minimize the repair cost and compare two systems, namely with and without surviving-node cooperation. Second, we study the repair problem where the links between storage nodes are lossy, e.g., due to server congestion, load balancing, or an unreliable physical layer (wireless links). We model the lossy links by packet erasure channels and then derive the fundamental bandwidth-storage tradeoff in packet erasure networks. In addition, we propose dedicated-for-repair storage nodes to reduce the repair bandwidth.
Third, we generalize the repair model by proposing the concept of partial repair, in which storage nodes may lose parts of their stored data. In partial repair, the lost data is recovered by exchanging data between storage nodes and using the data still available in the storage nodes as side information. For efficient partial repair, we propose two-layer coding in distributed storage systems and then derive the optimal bandwidth for partial repair. Fourth, we study security in distributed storage systems, investigating security in partial repair. In particular, we propose codes that make partial repair secure in the sense of both strong and weak information-theoretic security definitions. Finally, we study consistency in distributed storage systems. Consistency means that distinct users obtain the latest version of a file in a system that stores multiple versions of a file. Given the probability of receiving a version at a storage node and a constraint on the node storage space, we aim to find the optimal encoding of multiple versions of a file that maximizes the probability that a read client connecting to a number of storage nodes obtains the latest version of the file, or a version close to the latest. / Pages 153-168 are removed due to copyright reasons.
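As background for the bandwidth-storage tradeoff mentioned above, the sketch below evaluates the classical cut-set bound for regenerating codes and its two extreme operating points (minimum storage and minimum bandwidth) from the prior literature; it is not the thesis's packet-erasure or cooperative-repair model.

```python
# Background sketch (classical regenerating-code tradeoff from the literature,
# not the thesis's own model): a file of B symbols is stored on n nodes, any k
# suffice to rebuild it, and a failed node is repaired by downloading beta
# symbols from each of d surviving helper nodes.
def feasible(B, k, d, alpha, beta):
    """Cut-set bound: True if per-node storage alpha and per-helper download beta
    can support a file of size B."""
    return sum(min(alpha, (d - i) * beta) for i in range(k)) >= B

def msr_point(B, k, d):
    """Minimum-storage point: least alpha, with the total repair download it needs."""
    alpha = B / k
    gamma = d * B / (k * (d - k + 1))   # total repair download d * beta
    return alpha, gamma

def mbr_point(B, k, d):
    """Minimum-bandwidth point: least total repair download (alpha equals gamma)."""
    gamma = 2 * d * B / (k * (2 * d - k + 1))
    return gamma, gamma

B, k, d = 1200, 3, 5
print(msr_point(B, k, d))   # (400.0, about 666.7): store less, download more to repair
print(mbr_point(B, k, d))   # (500.0, 500.0): download less, store a bit more
print(feasible(B, k, d, alpha=400, beta=400 / 3))  # True: the MSR point satisfies the bound
```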
79

Massed and Distributed Practice in Beginning Gymnastics for College Women

Dixon, Carolyn 08 1900 (has links)
The study was undertaken to determine the effects of massed and distributed practice on the performance of beginning gymnastics skills, to secure data on these effects, and to evaluate these effects in acquiring the necessary components of motor fitness for basic gymnastics skills.
80

Fault analysis in microservices using distributed tracing

Sinner, Robin Andreas 26 April 2022 (has links)
With the architectural concept of microservices and the growing number of heterogeneous services, new challenges arise concerning the debugging, monitoring, and testing of such applications. Distributed tracing offers an approach to addressing these challenges. The goal of this thesis is to investigate how distributed tracing can be used for automated fault analysis of microservices. To this end, the following research question is posed: how can traces be evaluated in order to identify the root causes of faults when testing microservices? To answer the research question, a data format for the automated evaluation of tracing data was defined. For the evaluation, algorithms were designed that resolve the propagation of faults between services on the basis of causal relationships. This approach was realized as a prototypical implementation in Python and its functionality was evaluated. The results show that in around 77% of the test scenarios carried out, the root cause could be correctly derived from the tracing data with the help of the prototype. Without the prototype and without further debugging, the root cause could be identified from the application's own error output in only about 5% of the test scenarios. The concept and the prototype thus make debugging Python-based microservice applications easier. Contents: 1. Introduction 1.1. Motivation 1.2. Scope 1.3. Methodology 2. Fundamentals 2.1. Related Work 2.1.1. Automated Analysis of Tracing Information 2.1.2. Automated Root Cause Analysis 2.1.3. Root Cause Analysis in Microservices 2.1.4. Root Cause Analysis of Runtime Errors in Distributed Systems 2.1.5. Tracing Tool for Fault Detection 2.2. Theoretical Foundations 2.2.1. Microservices 2.2.2. Distributed Tracing 2.2.3. OpenTracing 2.2.4. Jaeger 2.2.5. Example Application for the Investigations 2.2.6. Continuous Integration / Continuous Delivery / Continuous Deployment 3. Design 3.1. Definition of the Data Format 3.1.1. Analysis of the Data Format of the OpenTracing Specification 3.1.2. Extensions 3.1.3. Resulting Data Format for Automated Evaluation 3.1.4. Clock Skew in Distributed Systems 3.2. Algorithms for Root Cause Analysis 3.2.1. Construction of a Dependency Graph 3.2.2. Path-Based Investigation of Root Causes 3.2.3. Evaluation by Temporal Order and Causal Relationship 3.2.4. Scoring of Potential Root Causes 3.3. Design of the Prototype 3.3.1. Integration into the Development Cycle 3.3.2. Functional Requirements 3.3.3. Architecture of the Prototype 4. Implementation 4.1. Implementation of the Root Cause Analysis Prototype 4.2. Integration of the Prototype into Test Scenarios and Continuous Integration 4.3. Tests for Evaluating the Prototype 5. Results 5.1. Evaluation of the Concept/Prototype 5.2. Evaluation of the Root Cause Scoring Methods 5.3. Economic Considerations 6. Conclusion/Outlook 6.1. Conclusion 6.2. Outlook References Declaration of Authorship A. Figures A.1. Mockups of the Prototype's Evaluation Reports B. Tables B.1. Tag Fields According to OpenTracing B.2. Log Fields According to OpenTracing B.3. Evaluation of the Test Results C. Listings C.1. Data Format for Automated Evaluation C.2. Definition of Rules for Evaluation D. Guide for the Prototype
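To make the dependency-graph idea concrete, the sketch below assumes an OpenTracing-like span layout (a span id, a parent reference, an error tag, and a start timestamp) and walks from a failed request span down its causal children to the deepest erroring span; this is an illustration only, not the data format or scoring used by the prototype.

```python
# Minimal illustration (assumed span layout, not the thesis's prototype or data
# format): follow CHILD_OF edges from a failed root span down to the deepest
# span that is itself tagged with an error, and report it as the root-cause
# candidate. Candidates are ordered by start time, so the earliest error leads.
def root_cause_candidates(spans):
    by_parent = {}
    for s in spans:
        by_parent.setdefault(s.get("parent_id"), []).append(s)

    def deepest_errors(span):
        erroring_children = [c for c in by_parent.get(span["span_id"], [])
                             if c["tags"].get("error")]
        if not erroring_children:
            return [span]                      # no erroring child: blame this span
        found = []
        for child in erroring_children:
            found.extend(deepest_errors(child))
        return found

    roots = [s for s in spans if s.get("parent_id") is None and s["tags"].get("error")]
    candidates = [c for r in roots for c in deepest_errors(r)]
    return sorted(candidates, key=lambda s: s["start_us"])

# Example trace: the gateway fails because the order service fails because the
# (hypothetical) payment service reported an error.
trace = [
    {"span_id": "a", "parent_id": None, "service": "gateway",  "start_us": 0,  "tags": {"error": True}},
    {"span_id": "b", "parent_id": "a",  "service": "orders",   "start_us": 10, "tags": {"error": True}},
    {"span_id": "c", "parent_id": "b",  "service": "payments", "start_us": 20, "tags": {"error": True}},
    {"span_id": "d", "parent_id": "a",  "service": "catalog",  "start_us": 15, "tags": {}},
]
print(root_cause_candidates(trace)[0]["service"])  # -> "payments"
```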
