201 |
Modèles, méthodes et outils pour les systèmes répartis multiéchelles / Models, methods and tools for multiscale distributed systems
Rottenberg, Sam, 27 April 2015 (has links)
Computer systems are becoming more and more complex. Most of them are distributed over several levels of Information and Communication Technology (ICT) infrastructures. These systems are sometimes referred to as multiscale distributed systems.
The word “multiscale” may describe widely varying distributed systems, depending on the viewpoints from which they are characterized, such as the geographic dispersion of the entities, the nature of the hosting devices, the networks they are deployed on, or the users’ organization. For a given entity of a multiscale system, the communication technologies, the non-functional properties (in terms of persistence or security), and the architectures to be favored may vary depending on the relevant multiscale characterization defined for the system and on the scale associated with the entity. Moreover, ad hoc architectures for such complex systems are costly and unsustainable. In this doctoral thesis, we propose a multiscale characterization framework called MuSCa. The framework includes a characterization process based on the concepts of viewpoints, dimensions, and scales, which makes it possible to bring out the multiscale characteristics of each studied system. These concepts form the core of a dedicated metamodel. The proposed framework allows designers of multiscale distributed systems to share a taxonomy for qualifying each system. The result of a characterization is a model from which the framework produces software artifacts that provide scale-awareness to the system’s entities at runtime.
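The viewpoint/dimension/scale concepts in this abstract can be pictured with a minimal sketch. All names below are illustrative assumptions, not MuSCa's actual metamodel or API:

```python
# Hypothetical sketch of viewpoints, dimensions, and scales; an entity
# queries its own position along each dimension (scale-awareness).
from dataclasses import dataclass, field

@dataclass
class Dimension:
    name: str
    scales: list  # ordered coarse -> fine, e.g. ["world", "city", "building"]

    def scale_of(self, value):
        # An entity declares which scale it sits at along this dimension.
        if value not in self.scales:
            raise ValueError(f"unknown scale {value!r} for dimension {self.name}")
        return self.scales.index(value)

@dataclass
class Viewpoint:
    name: str
    dimensions: dict = field(default_factory=dict)

    def add_dimension(self, dim):
        self.dimensions[dim.name] = dim

@dataclass
class Entity:
    name: str
    placement: dict  # dimension name -> scale name

    def scale_awareness(self, viewpoint):
        # Runtime artifact: the entity can introspect its own scales.
        return {d: viewpoint.dimensions[d].scale_of(s)
                for d, s in self.placement.items()}

geo = Viewpoint("geography")
geo.add_dimension(Dimension("dispersion", ["world", "city", "building"]))
sensor = Entity("sensor-42", {"dispersion": "building"})
print(sensor.scale_awareness(geo))  # {'dispersion': 2}
```

A real characterization would feed such a model to a generator producing the runtime artifacts the abstract mentions.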
|
202 |
Modelagem de um componente adaptativo para o gerenciamento dos recursos de acessibilidade de um sistema computacional de uso geral. / Modeling of an adaptive component to manage the accessibility resources of a general use system.
Carhuanina, Rosalia Edith Caya, 08 December 2015 (has links)
This work presents the modeling of a component that, through adaptive technology techniques, allows the reconfiguration of the user interface of a general-purpose legacy system. This reconfiguration focuses on the appropriate management of system resources in order to provide accessibility to users with special needs in the current context. Our proposal seeks to put within reach of people with disabilities the technological resources present in the information society, as well as their associated benefits. The objective is thus to lower the barrier, between these technologies and users with special needs, created by the development of computational systems under the traditional conception of a "standard user" profile. On this subject, the proposals found in the literature follow three approaches: a reactive approach (assistive technologies), a proactive approach (inclusive technologies), and a legal approach concerned with building an international legal framework. In the context of legacy systems already immersed in society, however, an open problem remains. In this case the reactive approach is neither logistically nor economically viable, since it would mean adding systems with assistive technologies for every specific community, for example blind users, deaf users, or users with motor impairments. Likewise, the proactive approach cannot be applied, because it is only practicable for technologies still under development, whereas our case concerns systems already in use. Our proposal is the modeling of a component that, through adaptive technology techniques, can assist in the reconfiguration of the resources of the system in question, taking into account both the current context of the interaction (user context, system context, and execution environment context) and the historical information gathered from previous executions.
To achieve this goal, a meta-architecture inspired by component-oriented programming is specified, which provides flexibility and loose coupling while preserving the integrity of the original system. Finally, a proof of concept is carried out to confirm the technical viability of the proposed model.
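The context-driven reconfiguration described in this abstract can be sketched roughly as a mapping from context and history to interface settings. The rule set and context fields below are assumptions for illustration, not the thesis's actual model:

```python
# Illustrative sketch: derive accessibility settings of a legacy UI from
# the current interaction context plus historical execution data.
def reconfigure(context, history):
    """Map interaction context onto accessibility settings."""
    settings = {"font_scale": 1.0, "screen_reader": False, "captions": False}
    if context.get("user_vision") == "low":
        settings["font_scale"] = 1.5
    if context.get("user_vision") == "blind":
        settings["screen_reader"] = True
    if context.get("user_hearing") == "impaired":
        settings["captions"] = True
    # Historical executions refine the defaults, e.g. keep the largest
    # font scale the user has previously accepted.
    past = [h.get("font_scale", 1.0) for h in history]
    if past:
        settings["font_scale"] = max([settings["font_scale"]] + past)
    return settings

cfg = reconfigure({"user_vision": "low"}, [{"font_scale": 1.25}])
print(cfg)  # {'font_scale': 1.5, 'screen_reader': False, 'captions': False}
```

An adaptive-technology implementation would let these rules themselves change at runtime rather than being fixed as above.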
|
203 |
Programming Idioms and Runtime Mechanisms for Distributed Pervasive Computing
Adhikari, Sameer, 13 October 2004 (has links)
The emergence of pervasive computing power and networking infrastructure is enabling new applications. Still, many milestones need to be reached before pervasive computing becomes an integral part of our lives. An important missing piece is the middleware that allows developers to easily create interesting pervasive computing applications.
This dissertation explores the middleware needs of distributed pervasive applications. The main contributions of this thesis are the design, implementation, and evaluation of two systems: D-Stampede and Crest. D-Stampede allows pervasive applications to access live stream data from multiple sources using time as an index. Crest allows applications to organize historical events, and to reason about them using time, location, and identity. Together they meet the important needs of pervasive computing applications.
D-Stampede supports a computational model called the thread-channel graph. The threads map to computing devices ranging from small to high-end processing elements. Channels serve as the conduits among the threads, specifically tuned to handle time-sequenced streaming data. D-Stampede allows the dynamic creation of threads and channels, and for the dynamic establishment (and removal) of the plumbing among them.
The Crest system assumes a universe that consists of participation servers and event stores, supporting a set of applications. Each application consists of distributed software entities working together. The participation server helps the application entities to discover each other for interaction purposes. Application entities can generate events, store them at an event store, and correlate events. The entities can communicate with one another directly, or indirectly through the event store.
We have qualitatively and quantitatively evaluated D-Stampede and Crest. The qualitative aspect refers to the ease of programming afforded by our programming abstractions for pervasive applications. The quantitative aspect measures the cost of the API calls, and the performance of an application pipeline that uses the systems.
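The time-indexed channel access described for D-Stampede can be sketched as follows. This is a rough sketch of the idea, not D-Stampede's real API, and the class and method names are assumptions:

```python
# Minimal time-indexed channel in the spirit of the thread-channel graph:
# producers put timestamped items, consumers fetch by time index.
import bisect
import threading

class Channel:
    def __init__(self):
        self._times, self._items = [], []
        self._lock = threading.Lock()

    def put(self, timestamp, item):
        with self._lock:
            i = bisect.bisect(self._times, timestamp)  # keep sorted by time
            self._times.insert(i, timestamp)
            self._items.insert(i, item)

    def get(self, timestamp):
        """Return the latest item at or before `timestamp`, or None."""
        with self._lock:
            i = bisect.bisect_right(self._times, timestamp) - 1
            return self._items[i] if i >= 0 else None

cam = Channel()
cam.put(10, "frame-a")
cam.put(20, "frame-b")
print(cam.get(15))  # frame-a
```

Dynamic creation and removal of such channels, and the plumbing between threads, would sit on top of this primitive.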
|
204 |
Dynamic Differential Data Protection for High-Performance and Pervasive Applications
Widener, Patrick M. (Patrick McCall), 20 July 2005 (has links)
Modern distributed applications are long-lived, are expected to provide flexible and adaptive data services, and must meet the functionality and scalability challenges posed by dynamically changing user communities in heterogeneous execution environments. The practical implications of these requirements are that reconfiguration and upgrades are increasingly necessary, but opportunities to perform such tasks offline are greatly reduced. Developers are responding to this situation by dynamically extending or adjusting application functionality and by tuning application performance, a typical method being the incorporation of client- or context-specific code into applications' execution loops.
Our work addresses a basic roadblock in deploying such solutions: the protection of key application components and sensitive data in distributed applications. Our approach, termed Dynamic Differential Data Protection (D3P), provides fine-grain mechanisms for component-based protection in distributed applications. Context-sensitive, application-specific security methods are deployed at runtime to enforce restrictions on data access and manipulation. D3P is suitable for low- or zero-downtime environments, since deployments are performed while applications run. D3P is appropriate for high performance environments and for highly scalable applications like publish/subscribe, because it creates native code via dynamic binary code generation. Finally, due to its integration into middleware, D3P can run across a wide variety of operating system and machine platforms.
This dissertation introduces D3P, using sample applications from the high performance and pervasive computing domains to illustrate the problems addressed by our D3P solution. It also describes how D3P can be integrated into modern middleware. We present experimental evaluations which demonstrate the fine-grain nature of D3P, that is, its ability to capture individual end users' or components' needs for data protection, and also describe the performance implications of using D3P in data-intensive applications.
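The idea of per-client protection filters installed while the system runs can be sketched as below. This is an interpretation for illustration only: the registry and event shape are assumptions, and D3P itself generates native code rather than interpreting callables like this:

```python
# Sketch of differential data protection: each client gets its own
# runtime-installed filter, applied in the publish path.
class ProtectedStream:
    def __init__(self):
        self._filters = {}  # client id -> callable(event) -> event or None

    def install_filter(self, client, fn):
        # Deployment happens at runtime; the stream keeps running.
        self._filters[client] = fn

    def publish(self, event):
        out = {}
        for client, fn in self._filters.items():
            view = fn(dict(event))  # give each filter its own copy
            if view is not None:
                out[client] = view
        return out

s = ProtectedStream()
s.install_filter("analyst", lambda e: e)                  # full view
s.install_filter("guest", lambda e: {"kind": e["kind"]})  # redacted view
views = s.publish({"kind": "reading", "patient": "p-17", "value": 98.6})
print(views["guest"])  # {'kind': 'reading'}
```

Compiling each filter to native code, as the abstract describes, removes the interpretation overhead this sketch would incur per event.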
|
205 |
Characterizing Middleware Mechanisms for Future Sensor Networks
Wolenetz, Matthew David, 20 July 2005 (has links)
Due to their promise for supporting applications society cares about and their unique blend of distributed systems and networking issues, wireless sensor networks (SN) have become an active research area. Most current SN use an arrangement of nodes with limited capabilities. Given SN device technology trends, we believe future SN nodes will have the computational capability of today's handhelds, and communication capabilities well beyond today's 'motes'. Applications will demand these increased capabilities in SN for performing computations in-network on higher bit-rate streaming data. We focus on interesting fusion applications such as automated surveillance. These applications combine one or more input streams via synthesis, or fusion, operations in a hierarchical fashion to produce high-level inference output streams.
For SN to successfully support fusion applications, they will need to be constructed to achieve application throughput and latency requirements while minimizing energy usage to increase application lifetime. This thesis investigates novel middleware mechanisms for improving application lifetime while achieving required latency and throughput, in the context of a variety of SN topologies and scales, models of potential fusion applications, and device radio and CPU capabilities.
We present a novel architecture, DFuse, for supporting data fusion applications in SN. Using a DFuse implementation and a novel simulator, MSSN, of the DFuse middleware, we investigate several middleware mechanisms for managing energy in SN. We demonstrate reasonable overhead for our prototype DFuse implementation on a small iPAQ SN. We propose and evaluate extensively an elegant distributed, local role-assignment heuristic that dynamically adapts the mapping of a fusion application to the SN, guided by a cost function. Using several studies with DFuse and MSSN, we show that this heuristic scales well and enables significant lifetime extension. We propose and evaluate with MSSN a predictive CPU scaling mechanism for dynamically optimizing energy usage by processors performing fusion. The scaling heuristic seeks to make the ratio of processing time to communication time for each synthesis operation conform to an input parameter. We show how tuning this parameter trades latency degradation for improved lifetime. These investigations demonstrate MSSN's utility for exposing tradeoffs fundamental to successful SN construction.
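The cost-guided role assignment described for DFuse can be pictured with a toy heuristic. The cost function below (transmission distance weighted by remaining energy) and all field names are assumptions for illustration, not the thesis's actual formulation:

```python
# Toy role-assignment heuristic: place a fusion role on the node that
# minimizes a cost combining distance to sources and remaining energy.
def cost(node, sources):
    tx = sum(abs(node["pos"] - s["pos"]) for s in sources)  # 1-D distance proxy
    return tx / max(node["energy"], 1e-9)  # low energy inflates cost

def assign_role(nodes, sources):
    # A distributed version would compare costs only among neighbors;
    # here we pick the global minimum for simplicity.
    return min(nodes, key=lambda n: cost(n, sources))

nodes = [
    {"id": "a", "pos": 0.0, "energy": 1.0},
    {"id": "b", "pos": 5.0, "energy": 1.0},
    {"id": "c", "pos": 9.0, "energy": 0.2},
]
sources = [{"pos": 4.0}, {"pos": 6.0}]
print(assign_role(nodes, sources)["id"])  # b
```

Re-running such an assignment as energy drains is what lets the mapping adapt and extend application lifetime.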
|
206 |
Proceedings of the 9th Workshop on Aspects, Components, and Patterns for Infrastructure Software (ACP4IS '10)
January 2010 (has links)
Aspect-oriented programming, component models, and design patterns are modern and actively evolving techniques for improving the modularization of complex software. In particular, these techniques hold great promise for the development of "systems infrastructure" software, e.g., application servers, middleware, virtual machines, compilers, operating systems, and other software that provides general services for higher-level applications. The developers of infrastructure software are faced with increasing demands from application programmers needing higher-level support for application development. Meeting these demands requires careful use of software modularization techniques, since infrastructural concerns are notoriously hard to modularize.
Aspects, components, and patterns provide very different means to deal with infrastructure software, but despite their differences, they have much in common. For instance, component models try to free the developer from the need to deal directly with services like security or transactions. These are primary examples of crosscutting concerns, and modularizing such concerns is the main target of aspect-oriented languages. Similarly, design patterns like Visitor and Interceptor facilitate the clean modularization of otherwise tangled concerns.
Building on the ACP4IS meetings at AOSD 2002-2009, this workshop aims to provide a highly interactive forum for researchers and developers to discuss the application of and relationships between aspects, components, and patterns within modern infrastructure software. The goal is to put aspects, components, and patterns into a common reference frame and to build connections between the software engineering and systems communities.
|
207 |
Preface
January 2010 (has links)
|
208 |
Midgard: um middleware baseado em componentes e orientado a recursos para redes de sensores sem fio / Midgard: a component-based, resource-oriented middleware for wireless sensor networks
Araújo, Rodrigo Pinheiro Marques de, 18 February 2011 (has links)
In recent years, several middleware platforms for Wireless Sensor Networks (WSN) have been proposed. Most of these platforms do not consider how to integrate components from generic middleware architectures. Many requirements must be considered in a middleware design for WSN; one desired property is the possibility of modifying the middleware source code without changing its external behavior. It is therefore desirable to have a generic middleware architecture able to offer an optimal configuration according to the requirements of the application. Adopting middleware based on a component model is a promising approach, because it allows better abstraction, low coupling, modularization, and management of the features built into the middleware. Another problem present in current middleware is the treatment of interoperability with networks external to the sensor network, such as the Web. Most current middleware lacks the functionality to access the data provided by the WSN via the World Wide Web, treating these data as Web resources that can be accessed through protocols already adopted on the Web. Thus, this work presents Midgard, a component-based middleware specifically designed for WSNs that adopts the microkernel and REST architectural patterns. The microkernel pattern complements the component model, since the microkernel can be understood as a component that encapsulates the system core and is responsible for initializing the core services only when needed, as well as removing them when they are no longer needed. REST, in turn, defines a standardized and lightweight way of communication between different applications based on standards adopted on the Web, and makes it possible to treat WSN data as Web resources accessible through protocols already adopted on the World Wide Web. The main goals of Midgard are: (i) to provide easy Web access to data generated by the WSN, exposing such data as Web resources following the principles of the Web of Things paradigm, and (ii) to provide the WSN application developer with the ability to instantiate only the specific services required by the application, thus generating a customized middleware and saving node resources. Midgard allows the WSN to be used as a set of Web resources while providing a cohesive and loosely coupled software architecture, addressing interoperability and customization in the same middleware. In addition, Midgard provides two services needed by most WSN applications: (i) a configuration service and (ii) an inspection and adaptation service. New services can be implemented by third parties and easily incorporated into the middleware, thanks to its flexible and extensible architecture. According to our assessment, Midgard provides interoperability between the WSN and external networks, such as the Web, as well as between different applications within a single WSN. We also assessed Midgard's memory consumption, application image size, size of messages exchanged in the network, response time, overhead, and scalability. During the evaluation, Midgard proved to satisfy its goals and showed itself to be scalable without consuming resources prohibitively.
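The microkernel pattern described for Midgard, starting core services only when needed and removing them afterwards, can be sketched minimally. Service names and the registry shape below are assumptions, not Midgard's actual interfaces:

```python
# Microkernel-style service registry: services are constructed lazily on
# first use and can be removed to free node resources.
class Microkernel:
    def __init__(self, factories):
        self._factories = factories  # name -> zero-arg constructor
        self._running = {}

    def get(self, name):
        # Lazy start: instantiate the service on first request only.
        if name not in self._running:
            self._running[name] = self._factories[name]()
        return self._running[name]

    def remove(self, name):
        # Unload a service that is no longer needed.
        self._running.pop(name, None)

    def running(self):
        return sorted(self._running)

kernel = Microkernel({
    "configuration": lambda: {"service": "configuration"},
    "inspection": lambda: {"service": "inspection"},
})
kernel.get("configuration")
print(kernel.running())  # ['configuration']
kernel.remove("configuration")
print(kernel.running())  # []
```

On a sensor node, instantiating only the services an application actually requests is what yields the customized, resource-saving middleware the abstract describes.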
|
209 |
Um serviço de certificação digital para plataformas de middleware / A digital certification service for middleware platforms
Batista, Caio Sergio de Vasconcelos, 19 May 2006 (has links)
Given the security vulnerabilities of distributed systems, mechanisms are needed to guarantee the security requirements of communication between distributed objects. Middleware platforms (component integration platforms) provide security functions that typically offer services for auditing, message protection, authentication, and access control. In order to support these functions, middleware platforms use digital certificates that are provided and managed by external entities. However, most middleware platforms do not define requirements for obtaining, maintaining, validating, and delegating digital certificates. In addition, most digital certification systems use X.509 certificates, which are complex and carry many attributes. To address these problems, this work proposes a generic digital certification service for middleware platforms. The service provides flexibility through the joint use of public-key certificates, which implement the authentication function, and attribute certificates, which implement the authorization function and support role-based access control. It also supports delegation. Certificate-based access control is transparent to the objects. The proposed service defines the digital certificate format, the storage and retrieval system, certificate validation, and support for delegation. To validate the proposed architecture, this work presents an implementation of the digital certification service for the CORBA middleware platform and a case study that illustrates the service's functionality.
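The split between identity certificates (authentication) and attribute certificates (authorization), including delegation, can be sketched as follows. The certificate fields are simplified assumptions, and unlike a real service there is no cryptographic signing here:

```python
# Sketch: attribute certificates carry roles for authorization; a holder
# with a delegatable certificate can re-issue a subset of its roles.
class AttributeCertificate:
    def __init__(self, holder, roles, issuer, delegatable=False):
        self.holder, self.roles = holder, set(roles)
        self.issuer, self.delegatable = issuer, delegatable

    def delegate(self, new_holder, roles):
        # Delegation: grant only roles the delegator itself holds.
        if not self.delegatable:
            raise PermissionError("certificate does not allow delegation")
        granted = self.roles & set(roles)
        return AttributeCertificate(new_holder, granted, issuer=self.holder)

def authorize(cert, holder, required_role):
    # Identity is assumed to have been authenticated separately against
    # a public-key certificate; here we only check the attribute side.
    return cert.holder == holder and required_role in cert.roles

admin = AttributeCertificate("alice", {"read", "write"}, "CA", delegatable=True)
bob_cert = admin.delegate("bob", {"read"})
print(authorize(bob_cert, "bob", "read"))   # True
print(authorize(bob_cert, "bob", "write"))  # False
```

Keeping roles in short-lived attribute certificates, separate from the long-lived identity certificate, is what makes role changes cheap in such a design.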
|
210 |
[en] GRIDFS: SERVER FOR GRIDS AND HETEROGENEOUS DISTRIBUTED ENVIRONMENTS / [pt] GRIDFS: UM SERVIDOR DE ARQUIVOS PARA GRADES E AMBIENTES DISTRIBUÍDOS HETEROGÊNEOS
MARCELO NERY DOS SANTOS, 30 October 2006 (links)
[en] Grid computing allows the use of computational resources distributed across several networks for tasks requiring high processing power. A grid infrastructure may help in the execution of these tasks and is able to coordinate their related activities, such as the provision of data files for the tasks executing on the grid nodes. GridFS is a system that enables file sharing in grids and heterogeneous distributed environments. By deploying servers over several machines, it is possible to build a federation integrating the various local file systems and opening up storage possibilities on the order of terabytes. The proposed system was modeled and developed considering several aspects such as scalability, interoperability, and performance. GridFS combines characteristics of file-sharing systems currently in use by the community: it provides an API for remote data access, offers copy operations allowing file transfers between servers, and supplies some special features for grid computing environments, such as an estimate of the transfer time between nodes. Apart from defining the system's characteristics and implementation aspects, this dissertation presents some experimental results on file transfer over the network and, as an evaluation, discusses the integration of GridFS with CSBase, a framework used to develop systems for grid computing.
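The client-facing API sketched in this abstract (remote read/write, server-to-server copy, transfer-time estimation) might look roughly like the following. Method names and the bandwidth model are assumptions, not GridFS's real interface:

```python
# Hypothetical shape of a federated file-server API: remote access,
# copy between servers, and a transfer-time estimate for schedulers.
class GridFSServer:
    def __init__(self, name, bandwidth_mbps):
        self.name, self.bandwidth = name, bandwidth_mbps
        self.files = {}  # path -> bytes

    def write(self, path, data):
        self.files[path] = data

    def read(self, path):
        return self.files[path]

    def copy_to(self, other, path):
        # File transfer between two servers of the federation.
        other.write(path, self.read(path))

    def estimate_transfer(self, other, path):
        # Seconds to move the file at the slower side's bandwidth.
        size_mb = len(self.read(path)) / 1e6
        return size_mb * 8 / min(self.bandwidth, other.bandwidth)

a = GridFSServer("node-a", bandwidth_mbps=100)
b = GridFSServer("node-b", bandwidth_mbps=10)
a.write("/data/run1.bin", b"x" * 1_000_000)
a.copy_to(b, "/data/run1.bin")
print(round(a.estimate_transfer(b, "/data/run1.bin"), 2))  # 0.8
```

A transfer-time estimate like this is the kind of hint a grid scheduler can use when deciding where to place a task relative to its data.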
|