151

A Message Oriented Middleware Library

Kuhlman, Christopher James, 01 January 2007
A message-oriented middleware inter-process communication library called Nora has been designed, constructed, and validated. The library is written in C++. The middleware is designed to bridge two of the main messaging standards, the Message Passing Interface (MPI) and the Data Distribution Service (DDS), by enabling communications for (1) computationally intensive distributed systems that typically follow a master-slave design and (2) general data distribution. The design is original and does not borrow from either specification. The library can be statically linked to application code so that the library is part of each application in a distributed system. The implementation for master-slave messaging has not yet been completed, but the great majority of the work is done; the general data distribution model has been fully implemented. The design is critically evaluated.

A key aspect of the library is configurability. Various characteristics of the messaging library, such as the number of message producer and consumer threads, the message types serviced by each thread, and the types of communication mechanisms, are specified through a configuration file. Consequently, the library has to be built only once for all applications in a distributed system, and communications for each application are tailored through a unique configuration file. The library application programmer interface (API) is structured so that communication details are isolated from the application code; applications are therefore not affected by changes to the IPC configuration.

Beyond its use for the two classes of problems listed above, the library is also suited to system architects investigating resource requirements and designs for new systems, because applications can be quickly reconfigured for different communication behavior on different platforms through the configuration file. It is thus useful for prototyping and performance evaluation.
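The abstract does not give Nora's configuration format or API; purely as a hedged illustration of the pattern it describes (one build of the library, with per-application communication behavior driven by a configuration file), the sketch below parses an invented config format and reports the messaging threads it would start. It is written in Java for readability, whereas the library itself is C++, and every name and field in it is an assumption rather than something taken from the thesis.

```java
import java.util.*;

// Hypothetical sketch of config-driven messaging setup in the style the
// abstract describes. The config format and all names are invented; the
// real library is C++ and statically linked into each application.
public class MessagingConfigDemo {
    // One entry per messaging thread: its role, the message types it
    // services, and the communication mechanism it uses.
    record ThreadSpec(String role, List<String> messageTypes, String transport) {}

    static List<ThreadSpec> parse(List<String> lines) {
        List<ThreadSpec> specs = new ArrayList<>();
        for (String line : lines) {
            if (line.isBlank() || line.startsWith("#")) continue;
            String[] f = line.split(";");  // invented format: role;types;transport
            specs.add(new ThreadSpec(f[0], List.of(f[1].split(",")), f[2]));
        }
        return specs;
    }

    public static void main(String[] args) {
        // In the pattern the abstract describes, this would be read from a
        // per-application file, so one library build serves every application.
        List<String> config = List.of(
                "# role;messageTypes;transport",
                "producer;TaskRequest;tcp",
                "consumer;TaskResult,Heartbeat;shared-memory");
        for (ThreadSpec s : parse(config))
            System.out.printf("start %s thread: types=%s transport=%s%n",
                    s.role(), s.messageTypes(), s.transport());
    }
}
```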
152

Dynamické rekonfigurace v komponentovém systému SOFA2 / Dynamic reconfiguration in SOFA 2 component system

Babka, David, January 2011
SOFA 2 is a component system employing hierarchically composed components in a distributed environment. It provides concepts for specifying dynamic reconfigurations of component architectures at runtime, which is essential for virtually any real-life application. The dynamic reconfigurations comprise creating and disposing components and creating and disposing connections between components. In contrast to the majority of component systems, SOFA 2 can specify the possible architectural reconfigurations in the application architecture at design time. This allows the SOFA 2 runtime to follow the dynamic behavior of the application and reflect that behavior in architectural reconfigurations. The goal of this thesis is to reify these concepts of dynamic reconfiguration in the implementation of SOFA 2 and to demonstrate their usage on a demo application.
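As a rough sketch of the four reconfiguration primitives named above (creating and disposing components, and creating and disposing connections between them), the toy runtime below tracks an architecture as a map from components to their connection targets. All types and method names are invented for illustration; SOFA 2's actual API differs.

```java
import java.util.*;

// Hypothetical runtime holding a dynamically reconfigurable architecture.
// All names are invented; SOFA 2's real API differs.
class ReconfigurableRuntime {
    private final Map<String, Set<String>> connections = new HashMap<>();

    // Create a component instance at runtime.
    void createComponent(String name) {
        connections.putIfAbsent(name, new HashSet<>());
    }

    // Dispose a component and every connection it participates in.
    void disposeComponent(String name) {
        connections.remove(name);
        connections.values().forEach(targets -> targets.remove(name));
    }

    // Create / dispose a connection between two live components.
    void connect(String from, String to) { connections.get(from).add(to); }
    void disconnect(String from, String to) { connections.get(from).remove(to); }

    public static void main(String[] args) {
        ReconfigurableRuntime rt = new ReconfigurableRuntime();
        rt.createComponent("logger");
        rt.createComponent("worker");
        rt.connect("worker", "logger");   // reconfiguration at runtime
        rt.disposeComponent("logger");    // its connections go away too
        System.out.println(rt.connections);
    }
}
```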
153

SOFAnet 2

Papež, Michal, January 2011
The aim of SOFAnet 2, as the network environment of the SOFA 2 component system, is to exchange components between SOFAnodes in a simple and rational way. Current concerns of SOFA 2 users about software distribution are analyzed and discussed. New high-level concepts of Applications and Components are defined, together with their mapping to SOFA 2 first-class concepts and the means of their distribution and removal. Furthermore, a methodology for keeping a SOFA 2 repository clean is introduced. All new elements, both concepts and operations, are studied using a formal set model. The proposed concept of SOFAnet 2 is proved by a prototype implementation.
154

Implementability of distributed systems described with scenarios / Implémentabilité de systèmes distribués décrits à l'aide de scénarios

Abdallah, Rouwaida, 16 July 2013
Distributed systems lie at the heart of many modern applications (social networks, web services, etc.). However, developers face many challenges in implementing them. The major one we focus on is avoiding erroneous behaviors that do not appear in the requirements of the distributed system and that are caused by concurrency between its entities. Automatic code generation from the requirements of distributed systems remains an old dream. In this thesis, we consider the automatic generation of a skeleton of code covering the interactions between the entities of a distributed system, which allows us to avoid the erroneous behaviors caused by concurrency. In a later step, this skeleton can be completed by adding and debugging the code that describes the local actions happening on each entity independently of its interactions with the other entities.

The automatic generation we consider starts from a scenario-based specification that formally describes the interactions within the informal requirements of a distributed system. We choose High-level Message Sequence Charts (HMSCs) as the scenario-based specification for the many advantages they present: clear graphical and textual representations, and a formal semantics. Code generation from HMSCs requires an intermediate step, called synthesis, which transforms them into an abstract machine model describing each entity's local view of the interactions (a machine representing an entity defines the sequences of message sends and receptions it performs). From the abstract machine model, generating the skeleton's code becomes an easy task. A very intuitive abstract machine model for the synthesis of HMSCs is Communicating Finite State Machines (CFSMs). However, synthesis from HMSCs into CFSMs may, in general, produce programs with more behaviors than the specification describes. We therefore restrict our specifications to a subclass of HMSCs named local HMSCs. We show that for any local HMSC, behaviors can be preserved by adding communication controllers that intercept messages and add stamping information before resending them. We then propose a new technique, named localization, to transform an arbitrary HMSC specification into a local HMSC, hence allowing correct synthesis. We show that this transformation can be automated as a constraint optimization problem, where the impact of the modifications brought to the original specification is minimized with respect to a cost function.

Finally, we have implemented the synthesis and localization approaches in an existing tool named SOFAT, to which we have also added the automatic generation, from HMSCs, of Promela code and of Java code for REST-based web services.
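As a minimal sketch of the controller idea above (intercept messages and add stamping information before resending, so that receivers can restore the intended order), consider the following. The thesis's actual stamping discipline is more elaborate; all names here are invented, and a plain per-sender sequence number stands in for the real stamps.

```java
import java.util.*;

// Hypothetical communication controller that stamps outgoing messages so
// the receiving side can deliver them in the intended order, buffering
// anything that arrives early. Names and the stamp format are invented.
class StampingController {
    record Stamped(int seq, String payload) {}

    private int nextSeq = 0;   // sender side: next stamp to assign
    private int expected = 0;  // receiver side: next stamp to deliver
    private final Queue<Stamped> pending =
        new PriorityQueue<>(Comparator.comparingInt(Stamped::seq));

    // Intercept an outgoing message and add a stamp before resending it.
    Stamped send(String payload) { return new Stamped(nextSeq++, payload); }

    // Deliver messages to the local entity in stamp order.
    List<String> receive(Stamped msg) {
        pending.add(msg);
        List<String> deliverable = new ArrayList<>();
        while (!pending.isEmpty() && pending.peek().seq() == expected) {
            deliverable.add(pending.poll().payload());
            expected++;
        }
        return deliverable;
    }

    public static void main(String[] args) {
        StampingController sender = new StampingController();
        StampingController receiver = new StampingController();
        Stamped a = sender.send("a"), b = sender.send("b");
        System.out.println(receiver.receive(b)); // [] -- buffered, early
        System.out.println(receiver.receive(a)); // [a, b] -- order restored
    }
}
```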
155

Models and algorithms for cyber-physical systems

Gujrati, Sumeet, January 1900
Doctor of Philosophy / Department of Computing and Information Sciences / Gurdip Singh

In this dissertation, we propose a cyber-physical system model and, based on this model, present algorithms for a set of distributed computing problems. Our model specifies a cyber-physical system as a combination of cyber-infrastructure, physical-infrastructure, and a user behavior specification. The cyber-infrastructure is superimposed on the physical-infrastructure and continuously monitors its changing state. Users operate in the physical-infrastructure and interact with the cyber-infrastructure using hand-held devices and sensors; their behavior is specified in terms of the actions they can perform (e.g., move, observe). While in traditional distributed systems users interact solely via the underlying cyber-infrastructure, users in a cyber-physical system may interact directly with one another, access sensor data directly, and perform actions asynchronously with respect to the underlying cyber-infrastructure. These additional types of interaction affect how distributed algorithms for cyber-physical systems are designed. We augment distributed mutual exclusion and predicate detection algorithms so that they can accommodate user behavior, the interactions among users, and the physical-infrastructure. The new algorithms have two components: one describing the behavior of the users in the physical-infrastructure and the other describing the algorithm in the cyber-infrastructure. Each combination of a user behavior and a cyber-infrastructure algorithm yields a different cyber-physical system algorithm, as sketched below. We have performed an extensive simulation study of our algorithms using the OMNeT++ simulation engine and the Uppaal model checker. We also propose the Cyber-Physical System Modeling Language (CPSML) for specifying cyber-physical systems, along with a centralized global-state recording algorithm.
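A minimal sketch of that two-component structure, with all interfaces and names invented: pairing a behavior specification with a cyber-side algorithm yields one concrete cyber-physical algorithm, and swapping either part yields another.

```java
public class CpsSketch {
    // Invented interfaces for the two components the abstract describes.
    interface UserBehavior { String nextAction(); }
    interface CyberAlgorithm { void onAction(String user, String action); }

    public static void main(String[] args) {
        // A trivial behavior: the user tries to enter a room, i.e. a
        // physical critical section.
        UserBehavior user = () -> "enter-room";
        // A trivial cyber-side reaction; a real mutual exclusion algorithm
        // would coordinate the grant across nodes.
        CyberAlgorithm mutex = (u, a) ->
            System.out.println(u + " requests " + a + ": granted");
        // Each (behavior, algorithm) pairing is a different CPS algorithm.
        mutex.onAction("alice", user.nextAction());
    }
}
```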
156

Android application for file storage and retrieval over secured and distributed file servers

Kukkadapu, Sowmya, January 1900
Master of Science / Department of Computing and Information Sciences / Daniel A. Andresen

The world has recently been trending toward smartphone use; today almost everyone uses a smartphone for a variety of purposes, aided by the availability of a large number of applications. The memory on the SD (Secure Digital) card, used to store large amounts of data including various files and important documents, is a constraint on smartphone usage. While many applications fill up that free space, hardly any manage it according to the user's choices, so an application is needed to manage the free space on the SD card. Moreover, important files stored only on an Android device cannot be retrieved if the device is lost. Targeting these memory-management problems, we developed an application that secures important documents, stores less-needed files on distributed file servers, and retrieves them on request.
157

Software components with support for data streams / Componentes de software com suporte a fluxo de dados

Victor Sa Freire Fusco, 18 January 2013
Component-based software development is a topic that has attracted much attention in recent years. This technique allows the construction of complex software systems in a quick and structured way. Several component models have been proposed by industry and academia. Most of these component models adopt Remote Procedure Calls as their basic communication mechanism; of the surveyed models, only the CORBA Component Model has a specification in progress for supporting communication over data streams. Such support is of great importance in systems that must deal with sensor data or with audio and video transmission. The main goal of this work is to propose an architecture that enables the Software Component System (SCS) middleware to support applications that require data streaming. To this end, the SCS component model was extended to support stream ports, as sketched below. As evaluation, this work presents experimental results on performance and scalability, as well as an application that exercises the needs of CSBase's algorithm flow executor, a framework used to build systems for grid computing.
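As a hedged illustration of what a stream port adds over request/reply calls (a continuous, one-way flow pushed to connected consumers with no round trip per chunk), the sketch below models an output stream port in plain Java. SCS itself is CORBA-based and its real API differs; all names here are invented.

```java
import java.util.*;
import java.util.function.Consumer;

// Hypothetical output stream port: consumers connect once, then chunks of
// a continuous flow (sensor data, audio, video) are pushed to all of them.
class StreamOutPort {
    private final List<Consumer<byte[]>> sinks = new ArrayList<>();

    // Another component's stream-in port connects here.
    void connect(Consumer<byte[]> sink) { sinks.add(sink); }

    // Push one chunk to every connected consumer, no request/reply.
    void push(byte[] chunk) { sinks.forEach(s -> s.accept(chunk)); }

    public static void main(String[] args) {
        StreamOutPort port = new StreamOutPort();
        port.connect(chunk -> System.out.println("got " + chunk.length + " bytes"));
        port.push(new byte[1024]);
    }
}
```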
158

Laboratório remoto baseado em software livre para realização de experimentos didáticos. / Remote laboratory based on open source software to perform educational experiments.

Ogiboski, Luciano, 15 June 2007
This work presents the development of a data acquisition system that controls experiments on measurement instruments through the GPIB interface. The system has educational goals and was integrated into a distance education environment, enabling remote access to real instruments so that they can be used in online courses. An open source course management system with interactive tools and easy administration was used; the chosen system allows courses to be created in a modular way, where the interaction components or resources are selected individually for each new environment. The objective of this work was the creation of a new module for that system: a remote laboratory for performing data acquisition experiments on instruments. A modular architecture for the remote laboratory was proposed, based on open source technologies, together with Web Services technology to integrate the acquisition system with the distance education environment. This work offers a new approach to remote instrumentation, providing not only the extension of a laboratory through the Internet and distributed systems, but also the interactive tools of distance education, favoring interaction and communication between users.
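As a rough illustration of the kind of instrument interaction such a laboratory mediates, the sketch below sends the standard IEEE 488.2 identification query *IDN? and prints the reply. The host and port are invented and assume a TCP/SCPI bridge in front of the instrument; the system described in the thesis instead drives the GPIB interface directly and exposes it through Web Services.

```java
import java.io.*;
import java.net.Socket;

// Minimal sketch: send the standard IEEE 488.2 query "*IDN?" to an
// instrument. Host and port are invented; they assume a TCP/SCPI bridge
// in front of the GPIB bus, whereas the thesis drives GPIB directly.
public class InstrumentQuery {
    public static void main(String[] args) throws IOException {
        try (Socket s = new Socket("lab-gateway.example.edu", 5025);
             BufferedReader in = new BufferedReader(
                 new InputStreamReader(s.getInputStream()));
             PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
            out.println("*IDN?");               // identification query
            System.out.println(in.readLine());  // e.g. maker,model,serial,fw
        }
    }
}
```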
159

Adaptação de vídeo através de redes de serviços sobrepostos. / Video adaptation through overlay services networks.

Kopp, Samuel, 29 October 2010
Video adaptation is a widely explored technique for delivering content in a form that suits diverse consumption scenarios, characterized by different network, terminal, and user-preference requirements and constraints, thus providing a better quality of experience. However, its application in high-demand video distribution systems such as CDNs is handled simplistically by existing proposals, which do not consider the various aspects of optimizing network usage. This work addresses these shortcomings by proposing a video adaptation service that explores the concept of context, elaborating an adaptation based on user profiles. Moreover, the proposed adaptation is fully integrated with distribution over overlay networks, making it possible to combine real-time adaptation with multicast transmission and caching, which ensures optimized use of network resources in distributing video streams. The feasibility and benefits of the proposal are demonstrated by experimental tests on a reference implementation of the service.
160

Um sistema de arquivos com blocos distribuídos e acesso a informações de localidade de dados. / A file system with distributed blocks and access to data locality information.

Sugawara Júnior, Ricardo Ryoiti, 30 April 2014
Many recent data-intensive parallel and distributed processing systems combine computing and storage facilities on cost-effective hardware to build large-scale systems. In such systems, interconnecting large numbers of nodes results in bandwidth-bisected networks, making data movement an important performance-limiting factor. By scheduling computational tasks on nodes close to the data, significant performance improvements can be obtained. However, data locality information is not easily accessible to the programmer: using it requires interacting with file system internals, or adopting a specific programming model, normally tied to an execution platform already prepared to schedule tasks with locality awareness. This work proposes a mechanism and interface that provide access to locality information and allow control over the placement of new data. The query and control operations are performed through special files and directories, transparently added to a file system with distributed data blocks that is suitable for parallel processing environments. The system is called parfs and exposes locality information through read and write operations on regular files, with no need for special libraries or programming models, as illustrated below. Tests were conducted to evaluate the proposal: using selective scheduling of data access operations based on locality information, significant performance gains were obtained in those operations.
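The abstract does not specify the layout of the special files; purely as an invented illustration of the idea (locality queried with ordinary file reads, no special library), a client might look like this. The mount point, the .locality path, and the one-pair-per-line format are all assumptions.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.List;

// Invented illustration of querying block locality through a special file
// using ordinary reads; the actual parfs paths and format are not given
// in the abstract.
public class LocalityQuery {
    public static void main(String[] args) throws IOException {
        // Hypothetical special file mirroring /mnt/parfs/data.bin,
        // one "blockIndex nodeName" pair per line.
        Path locality = Path.of("/mnt/parfs/.locality/data.bin");
        List<String> lines = Files.readAllLines(locality);
        for (String line : lines) {
            String[] f = line.split("\\s+");
            // A scheduler would place the task that reads block f[0]
            // on node f[1] to avoid moving the data.
            System.out.printf("block %s -> schedule on %s%n", f[0], f[1]);
        }
    }
}
```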
