301

AN EVOLUTIONARY APPROACH TO A COMMUNICATIONS INFRASTRUCTURE FOR INTEGRATED VOICE, VIDEO AND HIGH SPEED DATA FROM RANGE TO DESKTOP USING ATM

Smith, Quentin D., October 1993
International Telemetering Conference Proceedings / October 25-28, 1993 / Riviera Hotel and Convention Center, Las Vegas, Nevada / As technology progresses we are faced with ever-increasing volumes and rates of raw and processed telemetry data, along with digitized high-resolution video and the less demanding areas of video conferencing, voice communications and general LAN-based data communications. The distribution of all this data has traditionally been accomplished by solutions designed for each particular data type. With the advent of Asynchronous Transfer Mode, or ATM, a single technology now exists for providing an integrated solution to distributing these diverse data types. This allows an integrated set of switches, transmission equipment and fiber optics to provide multi-session connection speeds of 622 megabits per second. ATM allows for the integration of many of the most widely used and emerging low-, medium- and high-speed communications standards, including SONET, FDDI, Broadband ISDN, Cell Relay, DS-3, and Token Ring and Ethernet LANs. However, ATM is also well suited to handling unique data formats and speeds, as is often the case with telemetry data. Additionally, ATM is the only data communications technology in recent times to be embraced by both the computer and telecommunications industries. Thus, ATM is a single solution for connectivity within a test center, across a test range, or between ranges. ATM can be implemented in an evolutionary manner as needs develop, meaning the rate of capital investment can be gradual and older technologies can be phased out as they become communications bottlenecks. However, the success of this evolution requires some planning now. This paper provides an overview of ATM and its application to test ranges and telemetry distribution. A road map is laid out to guide the evolutionary changeover from today's technologies to a full ATM communications infrastructure.
Special applications such as the support of high performance multimedia workstations are presented.
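ATM's ability to carry such diverse traffic rests on one mechanism the abstract alludes to: every stream, whatever its native rate or format, is segmented into fixed 53-byte cells (a 5-byte header plus 48 bytes of payload). A minimal sketch of that segmentation, with a simplified header carrying only the virtual path/channel identifiers (real ATM headers also carry PTI, CLP and HEC fields):

```python
# Sketch: segmenting a variable-length message into fixed 53-byte ATM
# cells, the property that lets one switch fabric carry voice, video
# and telemetry alike. Header layout here is simplified.

CELL_PAYLOAD = 48  # bytes of user data per ATM cell

def segment(message: bytes, vpi: int, vci: int):
    """Split a message into 53-byte cells, zero-padding the last one."""
    cells = []
    for off in range(0, len(message), CELL_PAYLOAD):
        chunk = message[off:off + CELL_PAYLOAD].ljust(CELL_PAYLOAD, b"\x00")
        # simplified 5-byte header: VPI (2 bytes), VCI (2 bytes), reserved
        header = vpi.to_bytes(2, "big") + vci.to_bytes(2, "big") + b"\x00"
        cells.append(header + chunk)  # 53 bytes total
    return cells

cells = segment(b"telemetry frame" * 10, vpi=1, vci=42)  # 150-byte message
```

Fixed-size cells are what make hardware switching at 622 Mbit/s tractable: every switching decision handles the same small unit regardless of the traffic type.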
302

REAL-TIME HIGH SPEED DATA COLLECTION SYSTEM WITH ADVANCED DATA LINKS

Tidball, John E., October 1997
International Telemetering Conference Proceedings / October 27-30, 1997 / Riviera Hotel and Convention Center, Las Vegas, Nevada / The purpose of this paper is to describe the development of a very high-speed instrumentation and digital data recording system. The system converts multiple asynchronous analog signals to digital data, forms the data into packets, transmits the packets across fiber-optic lines, and routes the data packets to destinations such as high-speed recorders, hard disks, Ethernet, and data processing. This system is capable of collecting approximately one hundred megabytes per second of filtered, packetized data. The significant system features are its design methodology, system configuration, decoupled interfaces, data as packets, the use of RACEway data and VME control buses, distributed processing on mixed-vendor PowerPCs, real-time resource-management objects, and an extendible and flexible configuration.
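The "data as packets" feature can be illustrated with a toy framing scheme: each packet carries a channel identifier, a timestamp and a payload length, so downstream consumers (recorder, disk, Ethernet) can route it independently of its source. The field layout below is an assumption for illustration, not the system's actual format:

```python
# Hypothetical packet framing: channel id (2 bytes), timestamp in
# microseconds (8 bytes), payload length (4 bytes), then the samples.
import struct

HEADER = ">HQI"  # big-endian: unsigned short, unsigned long long, unsigned int

def make_packet(channel: int, timestamp_us: int, samples: bytes) -> bytes:
    """Wrap raw samples with routing metadata."""
    return struct.pack(HEADER, channel, timestamp_us, len(samples)) + samples

def parse_packet(pkt: bytes):
    """Recover (channel, timestamp, samples) from a framed packet."""
    channel, ts, n = struct.unpack(HEADER, pkt[:14])
    return channel, ts, pkt[14:14 + n]

pkt = make_packet(3, 1_000_000, b"\x01\x02\x03\x04")
```

Self-describing packets are what decouple the acquisition side from the consumers: a recorder and a live display can subscribe to the same stream and filter by channel id.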
303

The Role of Intelligent Mobile Agents in Network Management and Routing

Balamuru, Vinay Gopal 12 1900
In this research, the application of intelligent mobile agents to the management of distributed network environments is investigated. Intelligent mobile agents are programs which can move about network systems in a deterministic manner, carrying their execution state with them. These agents can be considered an application of distributed artificial intelligence in which the (usually small) agent code is moved to the data and executed locally. The mobile agent paradigm offers potential advantages over many conventional mechanisms which move the (often large) data to the code, wasting available network bandwidth. The performance of agents in network routing and knowledge acquisition has been investigated and simulated. A working mobile agent system has also been designed and implemented in JDK 1.2.
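The move-the-code-to-the-data idea can be rendered as a toy simulation: a small agent object hops along a deterministic itinerary of nodes, computing against each node's data locally and carrying only its accumulated state. The classes below are illustrative, not the thesis's JDK 1.2 system:

```python
# Toy mobile-agent model: the agent (small code + carried state) visits
# each node and computes at the data, instead of shipping all node data
# to a central site.

class Node:
    """A network host with some local data the agent wants to summarize."""
    def __init__(self, name, data):
        self.name, self.data = name, data

class Agent:
    def __init__(self):
        # the execution state the agent carries from node to node
        self.state = {"visited": [], "total": 0}

    def visit(self, node):
        self.state["visited"].append(node.name)
        self.state["total"] += sum(node.data)  # executed locally at the node

network = [Node("a", [1, 2]), Node("b", [3]), Node("c", [4, 5])]
agent = Agent()
for node in network:  # deterministic itinerary through the network
    agent.visit(node)
```

Only the small `state` dictionary ever crosses the network in this model, which is the bandwidth argument the abstract makes.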
304

Webový vyhledávací systém / Web Search Engine

Tamáš, Miroslav, January 2014
The academic full-text search engine Egothor has recently become the starting point of several theses focused on search. Until now, no solution was available that provided a robust set of web-content processing tools. This master's thesis aims at the design and implementation of a distributed search system working primarily with internet sources. We analyze the first-generation components for processing web content and summarize their primary features. We use those features to propose an architecture for a distributed web search engine, focusing mainly on the phases of data fetching, processing and indexing. We also describe the final implementation of the system and propose a few ideas for future extensions.
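The fetching, processing and indexing phases can be reduced to a minimal in-memory sketch, with page contents standing in for fetched web sources; an inverted index maps each token to the set of pages containing it. The names and data below are invented for illustration:

```python
# Minimal fetch -> process -> index pipeline: "processing" is
# tokenization, "indexing" builds posting lists (token -> set of URLs).
from collections import defaultdict

def build_index(pages):
    """pages: dict mapping URL -> fetched text. Returns an inverted index."""
    index = defaultdict(set)
    for url, text in pages.items():
        for token in text.lower().split():   # processing: naive tokenizer
            index[token].add(url)            # indexing: posting lists
    return index

pages = {"u1": "web search engine", "u2": "distributed search"}
idx = build_index(pages)
```

A distributed version of the same pipeline partitions `pages` across fetcher nodes and merges (or shards) the resulting posting lists, which is essentially the architecture question the thesis addresses.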
305

Sistemas de sensoriamento espectral cooperativos. / Cooperative spectrum sensing systems.

Paula, Amanda Souza de, 28 April 2014
This doctoral thesis deals with cooperative detection algorithms applied to the spectrum sensing problem in cognitive radio systems. The cooperative detection problem is approached under two distinct paradigms: centralized and distributed detection. In the first case, the system is assumed to have a fusion center responsible for the detection decision; in the second, the cognitive radios in the network exchange information among themselves and decisions are made locally. For centralized spectrum sensing, two cases are studied: one in which the cognitive radios send only a single decision bit to the fusion center (hard decision), and one in which each detector sends its test statistic to the fusion center (soft decision). For cooperative spectrum sensing with distributed detection, three scenarios are treated. In the first, the cognitive radios have a priori knowledge of both the signal sent by the system's primary user and the channel between them and the primary user. In the second, only the primary user's signal is known a priori. In the third, the cognitive radios have no a priori information about the primary user's signal. Beyond the distributed detection problem, the thesis also presents a chapter dedicated to the estimation problem, which is directly related to detection; it is approached using algorithms derived from classical adaptive filtering theory.
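The hard-decision path described above, in which each cognitive radio sends a single bit to the fusion center, is commonly realized with a local energy detector followed by a k-out-of-N voting rule at the fusion center. The sketch below follows that standard scheme; the threshold and sample values are illustrative, not taken from the thesis:

```python
# Centralized cooperative sensing with hard decisions: each radio
# compares its measured signal energy to a threshold and reports one
# bit; the fusion center votes.

def energy_detect(samples, threshold):
    """Local hard decision: 1 if average energy exceeds the threshold."""
    energy = sum(x * x for x in samples) / len(samples)
    return 1 if energy > threshold else 0

def fusion_center(bits, k):
    """k-out-of-N rule: declare the primary user present if >= k radios say so."""
    return 1 if sum(bits) >= k else 0

# three radios observing the same channel (values illustrative)
radios = [[0.1, 0.2, 0.1], [1.0, 1.2, 0.9], [0.9, 1.1, 1.0]]
bits = [energy_detect(s, threshold=0.5) for s in radios]
decision = fusion_center(bits, k=2)  # majority voting
```

With k=1 this becomes the OR rule and with k=N the AND rule, trading detection probability against false-alarm rate; the soft-decision variant instead forwards each radio's energy value and thresholds their combination at the fusion center.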
306

Deterministic Object Management in Large Distributed Systems

Mikhailov, Mikhail 05 March 2003
Caching is a widely used technique to improve the scalability of distributed systems. A central issue with caching is maintaining object replicas consistent with their master copies. Large distributed systems, such as the Web, typically deploy heuristic-based consistency mechanisms, which increase delay and place extra load on the servers, while not providing guarantees that cached copies served to clients are up-to-date. Server-driven invalidation has been proposed as an approach to strong cache consistency, but it requires servers to keep track of which objects are cached by which clients. We propose an alternative approach to strong cache consistency, called MONARCH, which does not require servers to maintain per-client state. Our approach builds on a few key observations. Large and popular sites, which attract the majority of the traffic, construct their pages from distinct components with various characteristics. Components may have different content types, change characteristics, and semantics. These components are merged together to produce a monolithic page, and the information about their uniqueness is lost. In our view, pages should serve as containers holding distinct objects with heterogeneous type and change characteristics while preserving the boundaries between these objects. Servers compile object characteristics and information about relationships between containers and embedded objects into explicit object management commands. Servers piggyback these commands onto existing request/response traffic so that client caches can use these commands to make object management decisions. The use of explicit content control commands is a deterministic, rather than heuristic, object management mechanism that gives content providers more control over their content. The deterministic object management with strong cache consistency offered by MONARCH allows content providers to make more of their content cacheable. 
Furthermore, MONARCH enables content providers to expose internal structure of their pages to clients. We evaluated MONARCH using simulations with content collected from real Web sites. The results show that MONARCH provides strong cache consistency for all objects, even for unpredictably changing ones, and incurs smaller byte and message overhead than heuristic policies. The results also show that as the request arrival rate or the number of clients increases, the amount of server state maintained by MONARCH remains the same while the amount of server state incurred by server invalidation mechanisms grows.
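MONARCH's central mechanism, explicit object-management commands piggybacked on existing request/response traffic so the client cache acts deterministically rather than heuristically, can be sketched in a few lines. The command vocabulary below is invented for illustration; the abstract does not specify the actual command set:

```python
# Sketch of server-driven, piggybacked cache control: each response may
# carry commands the client cache applies mechanically, so the server
# never needs per-client state and the cache never needs to guess.

class ClientCache:
    def __init__(self):
        self.store = {}

    def handle_response(self, obj_id, body, commands):
        """Store the object, then apply any piggybacked commands."""
        self.store[obj_id] = body
        for cmd, target in commands:          # rode along on the response
            if cmd == "invalidate":
                self.store.pop(target, None)  # deterministic, not heuristic

cache = ClientCache()
cache.handle_response("logo.png", b"<png>", [])
# a later page fetch carries a command invalidating the stale embedded object
cache.handle_response("page.html", b"<html>", [("invalidate", "logo.png")])
```

Because the commands travel on traffic the client was generating anyway, the server gets strong consistency without tracking which clients cache which objects.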
307

Probing for a Continual Validation Prototype

Gill, Peter W. 26 August 2001
Continual Validation of distributed software systems can facilitate their development and evolution and engender user trust. We present a monitoring architecture that is being developed collaboratively under DARPA's Dynamic Assembly for System Adaptability, Dependability, and Assurance program. The monitoring system includes a probing infrastructure that is injected into or wrapped around a target software system. Probes deliver events of interest to be processed by a monitoring infrastructure that consists of gauges for delivering information to system administrators. This thesis presents a classification of existing probing technologies and contains a full implementation of a probing infrastructure in Java.
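The probe/gauge split can be sketched in a few lines: a probe wrapped around a target-system function emits events of interest, which the monitoring side collects for aggregation. This is an illustrative Python rendering of the pattern, not the thesis's Java infrastructure:

```python
# A probe as a wrapper: calls to the target function run normally, but
# each one also emits an event to the monitoring side (here, a list
# standing in for the event bus that feeds the gauges).
import functools

events = []  # stand-in for the monitoring infrastructure's event channel

def probe(func):
    """Wrap a target-system function so every call emits an event."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        events.append({"probe": func.__name__, "args": args})
        return result
    return wrapper

@probe
def transfer(amount):
    return amount * 2  # stand-in for some target-system behaviour

transfer(10)
transfer(20)
```

The target system's behaviour is unchanged (the wrapper returns the original result), which is the essential property of injected or wrapped probes.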
309

An empirical investigation of SSDL

Fornasier, Patric, Computer Science & Engineering, Faculty of Engineering, UNSW January 2007
The SOAP Service Description Language (SSDL) is a SOAP-centric language for describing Web Service contracts. SSDL focuses on message abstraction as the building block for creating service-oriented applications and provides an extensible range of protocol frameworks that can be used to describe and formally model Web Service interactions. SSDL's natural alignment with service-oriented design principles intuitively suggests that it encourages the creation of applications that adhere to this architectural paradigm. Given the lack of tools and empirical data for using SSDL as part of Web Services-based SOAs, we identified the need to investigate its practicability and usefulness through empirical work. To that end we have developed Soya, a programming model and runtime environment for creating and executing SSDL-based Web Services. On the one hand, Soya provides straightforward programming abstractions that foster message-oriented thinking. On the other hand, it leverages contemporary tooling (i.e. Windows Communication Foundation) with SSDL-related runtime functionality and semantics. In this thesis, we describe the design and architecture of Soya and show how it makes it possible to use SSDL as an alternative and powerful metadata language without imposing unrealistic burdens on application developers. In addition, we use Soya and SSDL in a case study which provides a set of initial empirical results with respect to SSDL's strengths and drawbacks. In summary, our work serves as a knowledge framework for better understanding message-oriented Web Service development and demonstrates SSDL's practicability in terms of implementation and usability.
310

Eidolon: adapting distributed applications to their environment.

Potts, Daniel Paul, Computer Science & Engineering, Faculty of Engineering, UNSW January 2008
Grids, multi-clusters, NUMA systems, and ad-hoc collections of distributed computing devices all present diverse environments in which distributed computing applications can be run. Due to the diversity of features provided by these environments, a distributed application that is to perform well must be specifically designed and optimised for the environment in which it is deployed. Such optimisations generally affect the application's communication structure, its consistency protocols, and its communication protocols. This thesis explores approaches to improving the ability of distributed applications to share consistent data efficiently and with improved functionality over wide-area and diverse environments. We identify a fundamental separation of concerns for distributed applications. This is used to propose a new model, called the view model, which is a hybrid, cost-conscious approach to remote data sharing. It provides the necessary mechanisms and interconnects to improve the flexibility and functionality of data sharing without defining new programming models or protocols. We employ the view model to adapt distributed applications to their run-time environment without modifying the application or inventing new consistency or communication protocols. We explore the use of view model properties on several programming models and their consistency protocols. In particular, we focus on programming models used in distributed-shared-memory middleware and applications, as these can benefit significantly from the properties of the view model. Our evaluation demonstrates the benefits, side effects and potential shortcomings of the view model by comparing our model with traditional models when running distributed applications across several multi-cluster scenarios. In particular, we show that the view model improves the performance of distributed applications while reducing resource usage and communication overheads.
