131

BSPonP2P: modelo para exploração da computação colaborativa em aplicações BSP para ambientes grades P2P / BSPonP2P: a model for exploiting collaborative computing in BSP applications on P2P grid environments

Veith, Alexandre da Silva 29 August 2014 (has links)
Technologies in distributed systems and parallel computing are constantly advancing. Because electronic equipment has become affordable, companies increasingly invest in inexpensive devices so that everyone has access to them. As a consequence, much of this equipment is wasted: it sits idle for a large part of the time. In this context, this dissertation presents the BSPonP2P model, which seeks to put that idle capacity to useful work. BSPonP2P adopts a P2P Desktop Grid approach that uses the available equipment to perform useful computation concurrently with its owners' own workloads. The model builds its environment on approaches from both the structured and the unstructured P2P architectures, implemented to streamline the management of communication and information within the network. Another distinguishing feature of the proposed model is the use of the Bulk Synchronous Parallel (BSP) parallel programming model, which provides an environment for process execution, validating dependencies and improving communication between processes. From metrics such as memory, computation, communication, and device data, an index called PM is computed. This index is evaluated periodically to decide on process migrations, according to an environment variable α that is tied directly to the superstep barriers. Evaluations of process distributions in a heterogeneous test environment showed that BSPonP2P is effective: running an application under BSPonP2P increased execution time by less than 4% compared with plain execution, and running 26 processes with 2000 supersteps and α = 16 yielded a 6% gain from 24 process migrations. The scientific contribution is therefore the use of P2P Grid networks with BSP applications, using memory, computation, communication, and device metrics to evaluate the environment, together with migration and checkpoint services that enabled the model's good performance.
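
As an illustration only (not taken from the thesis), the sketch below shows in Python how a PM-like node index might be derived from memory, computation, communication and device metrics and re-evaluated at every α-th superstep barrier to trigger a migration decision; the weights, metric names and gain threshold are hypothetical assumptions.

    # Illustrative sketch: a PM-like node index re-evaluated every alpha supersteps.
    # Weights, metric names and the migration threshold are hypothetical assumptions.

    def pm_index(node, weights=(0.25, 0.25, 0.25, 0.25)):
        """Combine normalized metrics (0..1, higher = more capacity) into one score."""
        w_mem, w_cpu, w_net, w_dev = weights
        return (w_mem * node["free_memory"] +
                w_cpu * node["cpu_availability"] +
                w_net * node["network_quality"] +
                w_dev * node["device_score"])

    def plan_migrations(processes, nodes, superstep, alpha, gain_threshold=0.15):
        """At every alpha-th superstep barrier, move a process if a much better node exists."""
        migrations = []
        if superstep % alpha != 0:
            return migrations
        scores = {name: pm_index(m) for name, m in nodes.items()}
        for proc, current in processes.items():
            best = max(scores, key=scores.get)
            if scores[best] - scores[current] > gain_threshold:
                migrations.append((proc, current, best))
        return migrations

    nodes = {
        "n1": {"free_memory": 0.2, "cpu_availability": 0.3, "network_quality": 0.9, "device_score": 0.5},
        "n2": {"free_memory": 0.8, "cpu_availability": 0.9, "network_quality": 0.7, "device_score": 0.6},
    }
    print(plan_migrations({"p0": "n1"}, nodes, superstep=16, alpha=16))  # -> [('p0', 'n1', 'n2')]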
132

Um ambiente de execução para suporte à programação paralela com variáveis compartilhadas em sistemas distribuídos heterogêneos. / A runtime system for parallel programming with a shared-memory paradigm over heterogeneous distributed systems.

Craveiro, Gisele da Silva 31 October 2003 (has links)
Advances in hardware technology are making small SMP machines (2 to 8 processors) available at ever lower cost, so incorporating such machines into clusters of PCs, or even building clusters of SMPs, has become an increasingly viable alternative for high-performance computing. The great challenge is to exploit the potential that such a collection of machines offers. One alternative is a hybrid programming paradigm that exploits the shared-memory architecture through multithreading and uses the message-passing model for communication between nodes. However, this strategy imposes an arduous and unproductive task on the application programmer. This work presents CPAR-Cluster, a runtime system that provides a shared-memory abstraction on top of a cluster composed of single-processor and multiprocessor nodes. The system is implemented at the library level and requires no special resources such as dedicated hardware or operating-system modifications. The models, strategies, implementation issues, and results obtained from tests with the tool, which behaved as expected, are presented.
133

"Índices de carga e desempenho em ambientes paralelos/distribuídos - modelagem e métricas" / Load and Performance Index for Parallel/Distributed System - Modelling and Metrics

Kalinka Regina Lucas Jaquie Castelo Branco 15 December 2004 (has links)
This thesis addresses the problem of obtaining a load index or performance index suitable for process scheduling in heterogeneous parallel/distributed computing systems. A broad literature review with a corresponding critical analysis is presented; it is the basis for a comparison of the existing metrics for evaluating the degree of heterogeneity/homogeneity of computing systems. A new metric is proposed in this work, removing the restrictions identified in the comparative study, and results of its application are presented and discussed. The thesis also proposes the concept of temporal heterogeneity/homogeneity, which can be used for future improvements of scheduling policies on heterogeneous parallel/distributed computing platforms. A new performance index (Vector for Index of Performance, VIP), which generalizes the concept of a load index, is proposed based on a Euclidean metric. This new index is applied in the implementation of a scheduling policy and extensively tested through modelling and simulation. The results obtained are presented and analysed statistically. It is shown that the new index generally leads to good results, and a mapping of the advantages and disadvantages of adopting it, compared with traditional metrics, is presented.
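
Purely as an illustration (not the thesis's actual definition of VIP), a performance-index vector based on a Euclidean metric could be compared across nodes as sketched below in Python; the choice of vector components and the idle reference point are assumptions.

    # Illustrative sketch: comparing nodes by the Euclidean distance between a
    # performance vector (CPU, memory, I/O, network load) and an "ideal" reference.
    # The components and the all-zeros reference are assumptions, not the thesis's VIP.
    import math

    def vip_distance(load_vector, reference=None):
        """Euclidean distance of a node's load vector from a reference (default: idle node)."""
        if reference is None:
            reference = [0.0] * len(load_vector)
        return math.sqrt(sum((x - r) ** 2 for x, r in zip(load_vector, reference)))

    def pick_least_loaded(nodes):
        """Schedule on the node whose vector is closest to the idle reference."""
        return min(nodes, key=lambda name: vip_distance(nodes[name]))

    nodes = {
        "hostA": [0.9, 0.6, 0.4, 0.7],   # heavily loaded
        "hostB": [0.2, 0.3, 0.1, 0.2],   # lightly loaded
    }
    print(pick_least_loaded(nodes))      # -> hostB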
134

MOS - Modelo Ontológico de Segurança para negociação de política de controle de acesso em multidomínios. / MOS - Ontological Security Model for access control policy negotiation in multi-domains.

Yeda Regina Venturini 07 July 2006 (has links)
The evolution of network technologies and the growing number of fixed and portable devices belonging to a single user, which share resources among themselves, have introduced new concepts and challenges in networking and information security. This new reality motivated the development of a project to enable the formation of personal security domains and the secure association between such domains, forming a multi-domain. Multi-domain formation introduces new challenges for defining the access control security policy, since a multi-domain is composed of distinct administrative environments that need to share their resources for collaborative work. This work presents the main concepts involved in personal security domains and multi-domains, and proposes a security model that enables the dynamic negotiation and composition of the access control policy in these environments. The proposed model is called MOS, the Ontological Security Model. MOS is a role-based access control model whose elements are defined by an ontology. The ontology defines a common, standardized semantic language, allowing the policy to be interpreted by the different domains. Policy negotiation takes place through the definition of each domain's import and export policies, which represent the partial contributions of each domain to the formation of the multi-domain policy. The use of an ontology allows the dynamic composition of the multi-domain policy, as well as the detection and resolution of conflicts of interest, which reflect incompatibilities between import and export policies. MOS was validated by analysing its applicability to personal multi-domains. The analysis was carried out by defining a concrete model and simulating the negotiation and composition of the access control policy over a multi-domain defined for collaborative research projects. The results show that MOS allows the definition of an automatable procedure for creating access control policies in multi-domains.
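
As a hedged illustration of the idea only (the actual MOS ontology and policy language are not reproduced here), the sketch below composes a multi-domain policy from each domain's export and import sets and flags a conflict when an imported permission is not exported by any partner; the data layout and permission strings are assumptions.

    # Illustrative sketch: composing a multi-domain access control policy from each
    # domain's export policy (what it offers) and import policy (what it requires).
    # The dictionary layout and permission tuples are assumptions for illustration.

    def compose_multidomain_policy(domains):
        """Return the granted (role, resource) pairs and the unsatisfied imports."""
        exported = set()
        for d in domains.values():
            exported |= d["export"]

        granted, conflicts = set(), []
        for name, d in domains.items():
            for requirement in d["import"]:
                if requirement in exported:
                    granted.add(requirement)
                else:
                    conflicts.append((name, requirement))  # conflict of interest
        return granted, conflicts

    domains = {
        "lab_A": {"export": {("researcher", "dataset_A")},
                  "import": {("researcher", "printer_B")}},
        "lab_B": {"export": {("researcher", "printer_B")},
                  "import": {("researcher", "dataset_A"), ("admin", "backup_A")}},
    }
    granted, conflicts = compose_multidomain_policy(domains)
    print(granted)    # the cross-domain permissions that some domain exports
    print(conflicts)  # [('lab_B', ('admin', 'backup_A'))] -- exported by no one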
135

分散式伺服器最佳分割之演算法則 / A Partition Algorithm for the Establishment of Optimal Distributed Servers

陳麗秋, Chen Li-Chiou Unknown Date (has links)
In this thesis, we use a Petri net to model a system and propose a heuristic algorithm to partition the Petri net into several autonomous servers that can operate independently in a distributed system. We conduct a series of simulation experiments to test the performance of the algorithm and to identify the variables that influence the partition of the Petri net. The factors influencing the properties of a partition are identified and their inter-relationships are shown. Based on the simulation results, we provide suggestions for the development of applications in distributed system environments.
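
The thesis's own heuristic is not given in the abstract; the following Python fragment is only a generic illustration of the kind of greedy partitioning involved, cutting a Petri-net-like graph into a fixed number of server groups while counting the arcs that cross partitions. The nodes, arcs, group count and the heuristic itself are assumptions, not the algorithm proposed in the thesis.

    # Illustrative sketch only: a naive, size-capped greedy partition of a
    # Petri-net-like graph into k server groups, preferring the group that
    # already holds most of a node's neighbours.
    import math

    def partition(nodes, arcs, k):
        cap = math.ceil(len(nodes) / k)          # keep group sizes balanced
        groups = {i: set() for i in range(k)}
        assignment = {}
        for node in nodes:
            def score(g):
                if len(groups[g]) >= cap:        # full groups are never chosen
                    return (-1, 0)
                neighbours = sum(1 for a, b in arcs
                                 if (a == node and b in groups[g]) or (b == node and a in groups[g]))
                return (neighbours, -len(groups[g]))
            best = max(range(k), key=score)
            groups[best].add(node)
            assignment[node] = best
        cut = sum(1 for a, b in arcs if assignment[a] != assignment[b])
        return groups, cut

    nodes = ["p1", "t1", "p2", "t2", "p3", "t3"]
    arcs = [("p1", "t1"), ("t1", "p2"), ("p2", "t2"), ("t2", "p3"), ("p3", "t3")]
    groups, cut = partition(nodes, arcs, k=2)
    print({g: sorted(m) for g, m in groups.items()}, "cut arcs:", cut)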
136

Data and application migration in cloud based data centers --architectures and techniques

Zhang, Gong 19 May 2011 (has links)
Computing and communication continue to change the way we run businesses, the way we learn, and the way we live. The rapid evolution of computing technology has also accelerated the growth of digital data, the workload of services, and the complexity of applications. Today, the cost of managing storage hardware ranges from two to ten times the acquisition cost of the hardware itself, so there is increasing demand for technologies that shift the management burden from humans to software. Data migration and application migration are popular technologies that make computing and data-storage management autonomic and self-managing. This dissertation examines important issues in designing and developing scalable architectures and techniques for efficient and effective data migration and application migration. The first contribution is an investigation of automated data migration across multi-tier storage systems. The significant I/O improvement of Solid State Disks (SSD) over traditional rotational hard disks (HDD) motivates the integration of SSDs into the existing storage hierarchy for enhanced performance. We developed an adaptive look-ahead data migration approach to integrate SSDs effectively into a multi-tiered storage architecture. When the fast but expensive SSD tier stores high-temperature data (hot data) while relatively low-temperature data (cold data) is placed in the HDD tier, an important piece of functionality is managing the migration of data as access patterns change from hot to cold and vice versa; for example, daytime workloads in typical banking applications can differ dramatically from nighttime workloads. We designed and implemented an adaptive look-ahead data migration model. A unique feature of our automated approach is its ability to adapt the migration schedule dynamically to achieve optimal migration effectiveness, taking into account application-specific characteristics and I/O profiles as well as workload deadlines. Experiments run over real system traces show that the basic look-ahead data migration model is effective in improving system resource utilization, and that the adaptive model is more efficient for continuously improving and tuning the performance and scalability of multi-tier storage systems. The second contribution addresses the challenge of ensuring reliability and balancing load across a network of computing nodes managed in a decentralized service computing system. When providing location-based services for geographically distributed mobile users, the continuous and massive service request workloads pose significant technical challenges for scalable and reliable service provision. We design and develop a decentralized service computing architecture, called Reliable GeoGrid, with two unique features. First, we develop a distributed workload migration scheme with controlled replication, which uses a shortcut-based optimization to increase the system's resilience against node failures and network partition failures. Second, we devise a dynamic load-balancing technique to scale the system in anticipation of unexpected workload changes.
Our experimental results show that the Reliable GeoGrid architecture is highly scalable under changing service workloads with moving hotspots and highly reliable in the presence of massive node failures. The third research thrust studies the process of migrating applications from local physical data centers to the cloud. We design migration experiments, study the error types, and build an error model. Based on the analysis and observations from these experiments, we propose the CloudMig system, which provides both configuration validation and installation automation, effectively reducing configuration errors and installation complexity. This dissertation provides an in-depth discussion of the principles of migration and its applications in improving data storage performance, balancing service workloads, and adapting to cloud platforms.
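
To make the hot/cold tiering idea concrete, here is a minimal sketch assuming a simple access-count notion of data temperature; the thesis's actual look-ahead scheduling model is more elaborate and is not reproduced here, and the thresholds, window and capacity figures below are illustrative assumptions.

    # Minimal sketch of temperature-based tier migration between an SSD and an HDD
    # tier. "Temperature" is approximated by recent access counts.

    SSD_CAPACITY = 2          # number of extents the SSD tier can hold (toy value)
    HOT_THRESHOLD = 10        # accesses per window above which an extent is "hot"

    def plan_tier_moves(access_counts, current_tier):
        """Return (promote_to_ssd, demote_to_hdd) lists for the next migration window."""
        hot = [e for e, c in access_counts.items() if c >= HOT_THRESHOLD]
        hot.sort(key=lambda e: access_counts[e], reverse=True)
        wanted_on_ssd = set(hot[:SSD_CAPACITY])

        promote = [e for e in wanted_on_ssd if current_tier[e] == "hdd"]
        demote = [e for e, t in current_tier.items()
                  if t == "ssd" and e not in wanted_on_ssd]
        return promote, demote

    access_counts = {"ext1": 50, "ext2": 3, "ext3": 25, "ext4": 12}
    current_tier = {"ext1": "ssd", "ext2": "ssd", "ext3": "hdd", "ext4": "hdd"}
    print(plan_tier_moves(access_counts, current_tier))
    # -> (['ext3'], ['ext2']): ext3 became hot, ext2 cooled down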
137

Platform Independent Connections to Internet of Things

K.C., Sandeep January 2014 (has links)
In the past few years, technology has advanced by leaps and bounds, and a new topic has emerged: the Internet of Things. These things serve as sensors and actuators, connected to the Internet and able to communicate with each other in a P2P distributed manner. The sensors/actuators sense and generate contextual data about their surroundings to enable real-time, context-aware behaviour that makes them more personalized and intelligent. This contextual information can serve human purposes such as environment monitoring, home surveillance, elderly care, safety, and security surveillance. Moreover, since smart mobile devices with impressive capabilities have become hugely popular, the Internet of Things becomes much handier when smartphones are used both to interact with sensors and to generate information with their built-in sensors. The main aim of this thesis is to create an extension, an add-in layer of the Internet of Things (SensibleThings) platform architecture, that adds functionality such as querying UCI values within the platform and connecting different mobile devices regardless of programming language, implemented using the REST protocol. Furthermore, the intention is to build a P2P connection between the Java-based SensibleThings platform and a non-Java platform, i.e. iOS, by creating an Objective-C library that supports dissemination of contextual information between the discrete platforms in a distributed manner using JSON. Two servers were created, one using the Apache web server and one using sockets, to connect with the Objective-C library and to compare the performance of the extension and the library. The thesis also presents the implementation of the extension and the Objective-C library, integrated into proof-of-concept applications: an iOS application and a Mac OS desktop application that interact with the SensibleThings platform through requests over the REST protocol and receive the UCI value in JSON message format. Moreover, to identify the best solution for the SensibleThings platform, a hybrid application was developed using PhoneGap and jQuery Mobile within Xcode and compared with the iOS web app, and the mobile applications using the extension and the library with the two servers were also evaluated. The results show that the socket server is more scalable and more stable than the web server when interacting with the SensibleThings platform, while the iOS and Mac apps show little difference in performance. The results also suggest that a hybrid app, which can be developed with less effort and used on a variety of mobile devices, would be the better solution for the SensibleThings platform and might be the best solution for the IoT in the future. Lastly, the conclusions include possible future work to make the IoT better.
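
As an illustration of what such a REST/JSON interaction could look like (the actual endpoint, port and JSON fields of the SensibleThings extension are not documented in this abstract and are assumed here), consider:

    # Hypothetical sketch of querying a sensor value over REST and receiving JSON.
    # The host, port, path and response fields are assumptions for illustration;
    # they are not the SensibleThings extension's documented API.
    import json
    import urllib.parse
    import urllib.request

    def get_uci_value(gateway, uci):
        url = "http://{}/resolve?uci={}".format(gateway, urllib.parse.quote(uci))
        with urllib.request.urlopen(url, timeout=5) as resp:
            payload = json.loads(resp.read().decode("utf-8"))
        return payload.get("value")

    # Example (assuming a gateway node exposing the hypothetical /resolve endpoint):
    # temperature = get_uci_value("192.168.0.10:8080", "user@example.com/temperature")
    # print(temperature)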
139

Programming networks with intensional destinations / Programmation distribuée avec destinataires intentionnelles

Ahmad Kassem, Ahmad 04 November 2013 (has links)
Distributed programming is a challenging task. It has gained tremendous importance with the wide development of networks, which support an exponentially increasing number of applications. Distributed systems provide functionalities that are ensured by nodes which form a network and exchange data and services, possibly through messages. The provenance of a service is often not relevant, while its reliability is essential. Our aim is to provide a new communication model that allows specifying intensionally what service is needed, as opposed to which nodes provide it. This intensional specification of exchanges has the potential to facilitate distributed programming and to provide persistence of data in messages and resilience of systems, which constitute the topic of this thesis. We propose a framework that supports messages with intensional destinations, which are evaluated only on the fly while the messages are traveling. We introduce a rule-based language, Questlog, to handle intensional destinations. In contrast to existing rule-based network languages, which like Datalog follow the push mode, Questlog allows complex strategies to be expressed for recursively retrieving distributed data in pull mode. The language runs over a virtual machine that relies on a DBMS.
We demonstrate the approach with examples taken from two domains: (i) data-centric architectures, where a restricted class of client-server applications is seamlessly distributed over DHT-based peer-to-peer systems, and (ii) wireless sensor networks, where a virtual clustering protocol is proposed to aggregate data and cluster heads are elected using intensional destinations. Our simulations on the QuestMonitor platform demonstrate that this approach offers simplicity and modularity to protocols, as well as increased reliability.
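
Independently of Questlog's actual syntax (which is not shown in the abstract), the idea of an intensional destination can be illustrated with a small routing sketch in which the destination is a predicate over node properties, evaluated hop by hop as the message travels; the node properties, the predicate and the flooding-style forwarding below are assumptions for illustration.

    # Illustrative sketch: a message whose destination is a predicate ("intensional"),
    # evaluated on the fly at each hop instead of naming a node in advance.

    nodes = {
        "a": {"battery": 0.9, "has_temperature_sensor": False, "neighbours": ["b", "c"]},
        "b": {"battery": 0.4, "has_temperature_sensor": True,  "neighbours": ["a"]},
        "c": {"battery": 0.8, "has_temperature_sensor": True,  "neighbours": ["a"]},
    }

    def deliver(start, wanted, max_hops=5):
        """Forward a message until some node satisfies the intensional destination."""
        frontier, visited = [start], set()
        for _ in range(max_hops):
            next_frontier = []
            for n in frontier:
                if n in visited:
                    continue
                visited.add(n)
                if wanted(nodes[n]):          # destination evaluated at this hop
                    return n
                next_frontier.extend(nodes[n]["neighbours"])
            frontier = next_frontier
        return None

    # "Any node with a temperature sensor and enough battery", not a node name:
    target = deliver("a", lambda p: p["has_temperature_sensor"] and p["battery"] > 0.5)
    print(target)  # -> "c"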
140

Diffusion de modules compilés pour le langage distribué Termite Scheme / Distribution of compiled modules for the distributed language Termite Scheme

Hamel, Frédéric 03 1900 (has links)
This thesis describes and evaluates a module system that improves code migration in the distributed programming language Termite Scheme. The module system can be used in applications whether they are distributed or not; its goals are to ease the design of programs in a modular structure and to facilitate code migration between the nodes of a distributed system. The module system is designed for Gambit Scheme, the Scheme compiler and interpreter used to implement Termite, and the Termite Scheme system is used to implement distributed systems. The problem solved is the distribution of compiled code between the nodes of a distributed system when the receiving node has no prior knowledge of the code it receives. This is a challenging problem because the nodes are heterogeneous, with different architectures (x86, ARM). Our approach uses a naming model that identifies modules uniquely in a distributed context. Ease of use and portability were important factors in the design of the module system. The thesis describes the structure of the modules, their implementation in Gambit, and their application. The qualities of the module system are demonstrated through examples, and its performance is evaluated experimentally.
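
The thesis's actual naming model is not detailed in the abstract; purely as an illustrative assumption, a module could be identified by a name derived from its source content together with the target architecture, so that a node can tell whether it already holds a compatible compiled version:

    # Illustrative sketch (not the thesis's scheme): identify a compiled module by a
    # content hash of its source plus the target architecture, so heterogeneous
    # nodes (x86, ARM) can decide whether they need the code shipped to them.
    import hashlib

    def module_id(source_code, architecture):
        digest = hashlib.sha256(source_code.encode("utf-8")).hexdigest()[:16]
        return "{}:{}".format(digest, architecture)

    def needs_transfer(local_cache, source_code, architecture):
        """True if this node has no compiled version of the module for its architecture."""
        return module_id(source_code, architecture) not in local_cache

    source = "(define (square x) (* x x))"
    cache = {module_id(source, "x86_64")}
    print(needs_transfer(cache, source, "x86_64"))  # False: already present
    print(needs_transfer(cache, source, "arm"))     # True: must ship or recompile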
