
WSPE : um ambiente de programação peer-to-peer para a computação em grade / WSPE : a peer-to-peer programming environment for grid computing

Rosinha, Rômulo Bandeira. January 2007
A programming environment is a software tool that results from pairing a programming model with a runtime system. Its goal is to simplify application development and execution on a given computational infrastructure. A grid computing infrastructure has peculiar characteristics that make programming environments designed for more traditional infrastructures, such as massively parallel machines or clusters of computers, rather inefficient.

This work presents WSPE, a peer-to-peer programming environment for grid computing. WSPE supports grid-unaware applications that follow the task-parallelism programming model. The WSPE programming interface is defined through Java annotations. The runtime system follows a fully decentralized peer-to-peer model in order to achieve robustness and scalability. Although a complete runtime system must address many other concerns, the design of the WSPE runtime system focuses on performance, portability, scalability and adaptability. To this end, mechanisms were developed or adapted for scheduling, overlay network construction and adaptive parallelism support. The scheduling mechanism employed by the WSPE runtime system is based on the idea of work stealing and uses a new strategy that achieves up to five times higher efficiency than a more traditional strategy. Experiments with a WSPE prototype, as well as simulations, demonstrate the feasibility of the proposed programming environment.
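The abstract credits WSPE's scheduler to work stealing. As a rough illustration of that idea only (the class and method names below are invented for this sketch and are not WSPE's actual Java-annotation API), a minimal work-stealing scheduler keeps one double-ended queue per peer: each peer pops work from its own end and, when idle, steals from the opposite end of a randomly chosen victim.

```python
import random
import threading
from collections import deque

class Worker:
    """One peer: runs tasks from its own deque, steals when idle."""
    def __init__(self, wid, workers):
        self.wid = wid
        self.workers = workers      # shared list of all workers
        self.tasks = deque()        # local double-ended queue
        self.lock = threading.Lock()

    def push(self, task):
        with self.lock:
            self.tasks.append(task)                               # new work at the bottom

    def pop_local(self):
        with self.lock:
            return self.tasks.pop() if self.tasks else None       # LIFO locally

    def steal(self):
        victim = random.choice(self.workers)
        if victim is self:
            return None
        with victim.lock:
            return victim.tasks.popleft() if victim.tasks else None  # FIFO steal

    def run(self):
        while True:
            task = self.pop_local() or self.steal()
            if task is None:
                break               # deliberately simplistic termination
            task()

if __name__ == "__main__":
    workers = []
    workers.extend(Worker(i, workers) for i in range(4))
    for i in range(20):             # all work initially on worker 0
        workers[0].push(lambda i=i: print(f"task {i}"))
    threads = [threading.Thread(target=w.run) for w in workers]
    for t in threads: t.start()
    for t in threads: t.join()
```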

Uma arquitetura baseada em políticas para o provimento de QoS utilizando princípios de Autonomic Computing / A policy-based architecture for QoS provisioning using autonomic computing principles

Franco, Theo Ferreira. January 2008
Modern corporate systems that depend increasingly on the network, together with the integration of services around the TCP/IP model, raise the Quality of Service (QoS) requirements placed on the IT infrastructure. In this scenario, the dynamism of current networks and the new QoS requirements demand a more autonomous and reliable IT infrastructure. To address this issue, the Policy-Based Network Management model proposed by the IETF has been consolidating as an approach to control network behavior by controlling the configuration of its devices. However, this model focuses on managing policies inside a single administrative domain. This brings some limitations, such as the inability to establish any kind of coordination between different PDPs and the impossibility of reacting to external events.

To add autonomy to the policy-based management model, this work proposes a layered architecture built on Autonomic Computing concepts related to: i) dynamic adaptation of the managed resources in response to changes in the environment; ii) integration with management systems of other domains through the reception of their notifications; iii) the ability to plan management actions; and iv) the execution of management actions that span more than one administrative domain, establishing a kind of coordination between PDPs. To implement these concepts, the architecture places a peer-to-peer (P2P) layer on top of the policy platform. From a received notification, the P2P layer plans actions that adapt network behavior to the events observed in the IT infrastructure. The planned actions translate into inclusions or removals of policies in the policy platform responsible for managing the configuration of the network devices. For notifications that involve resources of more than one administrative domain, the management peers act in a coordinated way to deploy the appropriate actions in each domain. The proposed architecture was designed with a focus on providing QoS in a DiffServ-enabled network, although its structure is believed to be generic enough to apply to other contexts. As a case study, the use of the architecture in response to events generated by a computational grid was analyzed, and a prototype was built using the Globus Toolkit 4 as an event source.
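To make the notification-to-policy path described above concrete, here is a minimal, purely illustrative sketch (the event fields, policy actions and class names are assumptions, not the architecture's actual interfaces): a management peer receives a notification, plans a change, and installs or removes policies in the policy platform.

```python
from dataclasses import dataclass

@dataclass
class Notification:
    domain: str        # administrative domain that raised the event
    resource: str      # affected resource, e.g. a DiffServ traffic class
    overloaded: bool   # simplified condition reported by the event source

@dataclass
class Policy:
    target: str
    action: str        # e.g. "increase-bandwidth"

class PolicyPlatform:
    """Stand-in for the policy platform (PDP) that configures the devices."""
    def __init__(self):
        self.active = []
    def install(self, policy):
        self.active.append(policy)
        print(f"installed: {policy}")
    def remove(self, policy):
        self.active.remove(policy)
        print(f"removed: {policy}")

class ManagementPeer:
    """P2P-layer peer: receives notifications, plans, and acts on the platform."""
    def __init__(self, domain, platform):
        self.domain = domain
        self.platform = platform
    def on_notification(self, note: Notification):
        # Planning step: map the observed event to a policy change.
        if note.overloaded:
            self.platform.install(Policy(note.resource, "increase-bandwidth"))
        else:
            for p in [p for p in self.platform.active if p.target == note.resource]:
                self.platform.remove(p)

peer = ManagementPeer("domain-A", PolicyPlatform())
peer.on_notification(Notification("domain-B", "EF-class", overloaded=True))
peer.on_notification(Notification("domain-B", "EF-class", overloaded=False))
```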

Um serviço de metadados integrado ao middleware de grade MAG / A metadata service integrated with the MAG grid middleware

Sousa, Bysmarck Barros de. 10 July 2006
Grid computing enables the integration and sharing of resources such as software, data and hardware across institutional and multi-institutional environments. Recently, due to the large volume of data generated in the most varied application domains, applications that perform intensive computation have been widely used. MagCat addresses the challenge of discovering and publishing the data shared in a computing grid. MagCat was developed with software agent technology, exploiting the strong abstraction capacity of this technology.
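As a loose illustration of the publish-and-discover role such a metadata service plays (the interface below is hypothetical, not MagCat's real agent-based API), a metadata catalog reduces to two operations:

```python
class MetadataCatalog:
    """Toy catalog: publish descriptions of shared data, then discover them."""
    def __init__(self):
        self.entries = []   # list of metadata dictionaries

    def publish(self, name, location, **attributes):
        self.entries.append({"name": name, "location": location, **attributes})

    def discover(self, **criteria):
        # Return every entry whose metadata matches all given criteria.
        return [e for e in self.entries
                if all(e.get(k) == v for k, v in criteria.items())]

catalog = MetadataCatalog()
catalog.publish("genome-set-42", "gsiftp://node7/data/g42", format="fasta", size_gb=12)
catalog.publish("climate-2005", "gsiftp://node3/data/cl05", format="netcdf", size_gb=80)
print(catalog.discover(format="fasta"))
```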

AGST (Autonomic Grid Simulation Tool): uma ferramenta para modelagem, simulação e avaliação de abordagens autonômicas para grades de computadores / AGST (Autonomic Grid Simulation Tool): a tool for modeling, simulating and evaluating autonomic approaches for computer grids

Gomes, Berto de Tácio Pereira. 09 March 2012
Computer grids are characterized by the high dynamism of their execution environment, the heterogeneity of resources and tasks, and high scalability. These features make tasks such as configuration, maintenance and failure recovery quite challenging, and increasingly difficult to perform by human agents alone. The term autonomic computing denotes computer systems capable of changing their behavior dynamically in response to changes in the execution environment. To achieve this, the software is generally organized following the MAPE-K (Monitoring, Analysis, Planning, Execution and Knowledge) model, in which autonomic managers sense the execution environment, analyze context information, and plan and execute dynamic reconfiguration actions based on shared knowledge about the controlled system. Several recent research efforts seek to apply autonomic computing techniques to grid computing, providing more autonomy and reducing the need for human intervention in the maintenance and management of these computing environments, thus creating the concept of an autonomic grid.

This dissertation presents AGST (Autonomic Grid Simulation Tool), a new simulation tool for assisting the development and evaluation of autonomic grid approaches. Its main contribution is the definition and implementation of a simulation model based on the MAPE-K autonomic management cycle, which can be used to simulate the monitoring, analysis, planning and execution functions, thereby allowing the simulation of an autonomic computing grid. AGST also supports parametric and compositional dynamic adaptations of managed elements. The work presents two case studies in which the proposed tool was successfully used to model, simulate and evaluate approaches for computer grids.
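A minimal sketch of the MAPE-K cycle that AGST simulates may help; the managed element, sensor reading and adaptation action below are placeholders, not AGST's actual simulation API.

```python
import random
import time

class AutonomicManager:
    """Bare-bones MAPE-K loop: monitor, analyze, plan, execute over shared knowledge."""
    def __init__(self, managed_element):
        self.element = managed_element
        self.knowledge = {"load_threshold": 0.8}          # K: shared knowledge base

    def monitor(self):
        return {"load": self.element.current_load()}      # M: sense the environment

    def analyze(self, symptoms):
        return symptoms["load"] > self.knowledge["load_threshold"]   # A: problem?

    def plan(self):
        return [("add_worker", 1)]                         # P: reconfiguration plan

    def execute(self, plan):
        for action, arg in plan:                           # E: apply the plan
            getattr(self.element, action)(arg)

    def loop_once(self):
        symptoms = self.monitor()
        if self.analyze(symptoms):
            self.execute(self.plan())

class GridSite:
    def __init__(self):
        self.workers = 2
    def current_load(self):
        return random.random()                             # placeholder sensor reading
    def add_worker(self, n):
        self.workers += n
        print(f"scaled up to {self.workers} workers")

manager = AutonomicManager(GridSite())
for _ in range(5):
    manager.loop_once()
    time.sleep(0.1)
```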

Estudo comparativo de técnicas de escalonamento de tarefas dependentes para grades computacionais / A comparative study of dependent-task scheduling techniques for computational grids

Alvaro Henry Mamani Aliaga. 22 August 2011
As science advances, many applications in different areas need large amounts of computational power. Grid computing is an important alternative for obtaining high processing power, but this computational power must be used well. With specialized scheduling techniques, resources can be used appropriately. Several scheduling algorithms have been proposed for grid computing, so a sound methodology is needed to choose the algorithm that offers the best performance under given conditions.

In this work we compare the dependent-task scheduling algorithms (a) Heterogeneous Earliest Finish Time (HEFT), (b) Critical Path on a Processor (CPOP) and (c) Path Clustering Heuristic (PCH). Each algorithm is evaluated with different applications and on different architectures using simulation techniques, following four criteria: (i) performance, (ii) scalability, (iii) adaptability and (iv) workload distribution. We distinguish two kinds of grid applications, regular and irregular, since the scalability criterion is not easy to compare for irregular applications. Under this set of criteria, the HEFT algorithm achieves the best performance and scalability, while the three algorithms show the same level of adaptability. In workload distribution, HEFT makes better use of resources than the others. CPOP and PCH, on the other hand, schedule the tasks of the critical path onto the processor that minimizes their earliest finish time, but this approach is not always the most appropriate.
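For readers unfamiliar with HEFT, its task priority is the "upward rank": a task's average computation cost plus the most expensive path (communication plus successor ranks) to the exit task; tasks are then placed, in decreasing rank order, on the processor giving the earliest finish time. A small sketch with made-up costs, independent of the simulator used in the study:

```python
from functools import lru_cache

# Illustrative DAG: task -> list of (successor, average communication cost)
successors = {
    "A": [("B", 4), ("C", 6)],
    "B": [("D", 3)],
    "C": [("D", 2)],
    "D": [],
}
# Average computation cost of each task across the heterogeneous processors
avg_cost = {"A": 10, "B": 8, "C": 12, "D": 5}

@lru_cache(maxsize=None)
def upward_rank(task):
    """HEFT priority: own cost plus the most expensive path to the exit task."""
    if not successors[task]:
        return avg_cost[task]
    return avg_cost[task] + max(comm + upward_rank(succ)
                                for succ, comm in successors[task])

# HEFT considers tasks in decreasing upward-rank order, then maps each one to
# the processor that yields its earliest finish time (processor step not shown).
order = sorted(successors, key=upward_rank, reverse=True)
print([(t, upward_rank(t)) for t in order])   # A, C, B, D with ranks 35, 19, 16, 5
```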

Capacity allocation mechanisms for grid environments

Gardfjäll, Peter. January 2006
During the past decade, Grid computing has gained popularity as a means to build powerful computing infrastructures by aggregating distributed computing capacity. Grid technology allows computing resources that belong to different organizations to be integrated into a single unified system image – a Grid. As such, Grid technology constitutes a key enabler of large-scale, cross-organizational sharing of computing resources. An important objective for the Virtual Organizations (VOs) that result from such sharing is to tame the distributed capacity of the Grid in order to manage it and make fair and efficient use of the pooled computing resources. Most Grids to date have, however, been completely unregulated, essentially serving as a “source of free CPU cycles” for authorized Grid users. Whenever unrestricted access is admitted to a shared resource there is a risk of overexploitation and degradation of the common resource, a phenomenon often referred to as “the tragedy of the commons”. This thesis addresses this problem by presenting two complementary Grid capacity allocation systems that allow the aggregate computing capacity of a Grid to be divided between users in order to protect the Grid from overuse while delivering fair service that satisfies the individual computational needs of different user groups. These two Grid capacity allocation mechanisms constitute the core contribution of this thesis.

The first mechanism, the SweGrid Accounting System (SGAS), addresses the need for coordinated soft, real-time quota enforcement across Grid sites. The SGAS project was an early adopter of the service-oriented principles that are now common practice in the Grid community, and the system has been tested in the SweGrid production environment. Furthermore, SGAS has been included in the Globus Toolkit, the de facto standard Grid middleware toolkit. SGAS employs a credit-based allocation model where research projects are granted quota allowances that can be spent across the Grid resources, which charge users for their resource consumption. This enforcement of usage limits thus produces real-time overuse protection. The second approach, employed by the Fair Share Grid (FSGrid) system, uses a share-based allocation model where project entitlements are expressed in terms of hierarchical share policies that logically divide the Grid capacity between user groups. By coordinating local job scheduling to maintain these global capacity shares, the Grid resources collectively strive to schedule users for a “share of the Grid”. We refer to this cooperative scheduling model as decentralized Grid-wide fairshare scheduling.
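The credit-based model behind SGAS can be pictured with a toy quota bank (illustrative only; SGAS's real service interfaces and accounting records are not shown here): projects receive allowances, sites ask whether a job may run, and actual usage is charged back, giving soft real-time overuse protection.

```python
class QuotaBank:
    """Toy credit-based allocation: projects spend a grid-wide quota as jobs run."""
    def __init__(self):
        self.balance = {}                       # project -> remaining core-hours

    def grant(self, project, core_hours):
        self.balance[project] = self.balance.get(project, 0) + core_hours

    def authorize(self, project, requested):
        # Soft, real-time enforcement: admit the job only if quota remains.
        return self.balance.get(project, 0) >= requested

    def charge(self, project, consumed):
        self.balance[project] = self.balance.get(project, 0) - consumed

bank = QuotaBank()
bank.grant("physics-vo", 1000)
job = {"project": "physics-vo", "estimate": 64}
if bank.authorize(job["project"], job["estimate"]):
    # ... the site runs the job, then reports the actual usage back ...
    bank.charge(job["project"], 58)
print(bank.balance)
```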

Stochastic approach to Brokering heuristics for computational grids / Approche stochastique d'heuristiques de méta-ordonnancement dans les grilles de calcul

Berten, Vandy. 08 June 2007
Computational grids are large infrastructures composed of several components such as clusters or massively parallel machines, generally spread across a country or the world, linked together through a network such as the Internet, and allowing transparent access to any resource. Grids have become unavoidable for a large part of the scientific community requiring computational power, such as high-energy physics, bioinformatics or earth observation. Large projects are emerging, often at an international level, but even if grids are on the way to becoming efficient and user-friendly systems, computer scientists and engineers still have a huge amount of work to do to improve their efficiency. Among the many problems to solve or improve upon, scheduling the work and balancing the load is of first importance.

This work concentrates on the way work is dispatched on such systems, and mainly on how the first level of scheduling, generally named brokering or meta-scheduling, is performed. We analyze in depth the behavior of popular strategies, compare their efficiency, and propose a new, very efficient brokering policy providing notable performance, attested by the large number of simulations performed and reported in the document.

The work is split into two main parts. After introducing the mathematical framework on which the rest of the manuscript is based, we study systems where grid brokering is done without any feedback information, i.e. without knowing the current state of the clusters when the resource broker (the grid component receiving jobs from clients and performing the brokering) makes its decision. We show how a computational grid behaves if the brokering is done in such a way that each cluster receives a quantity of work proportional to its computational capacity.

The second part of this work is rather independent of the first, and presents a brokering strategy based on Whittle's indices that tries to minimize the average sojourn time of jobs as much as possible. We show how efficient the proposed strategy is for computational grids compared with the strategies popular in production systems. We also show its robustness to several parameter changes, and provide several very efficient algorithms for making the computations required by this index policy. We finally extend our model in several directions.

Doctorate in Sciences, specialization in Computer Science.
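The first part of the thesis studies feedback-less brokering in which each cluster receives work in proportion to its capacity. A minimal sketch of that dispatching rule follows (cluster names and capacities are invented; the Whittle-index policy of the second part is considerably more involved and is not shown):

```python
import random

# Illustrative cluster capacities (e.g., number of identical CPUs per cluster).
capacities = {"cluster-A": 64, "cluster-B": 256, "cluster-C": 128}
total = sum(capacities.values())

def broker(job_id):
    """Feedback-less brokering: route each job to a cluster with probability
    proportional to its computational capacity, ignoring current queue states."""
    r = random.uniform(0, total)
    for cluster, cap in capacities.items():
        if r < cap:
            return cluster
        r -= cap
    return cluster   # numerical edge case: fall back to the last cluster

dispatched = [broker(j) for j in range(10_000)]
for cluster in capacities:
    share = dispatched.count(cluster) / len(dispatched)
    print(f"{cluster}: {share:.2%} of jobs")
```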

Protecting grid computing networks from cross-domain attacks using security alert sharing mechanisms and classification of administrative domains in security levels / Protection des réseaux de calcul de grille contre les attaques interdomaines. Utilisation des mécanismes de partage d'alertes de sécurité et classification des domaines administratifs dans les niveaux de sécurité

Syed, Raheel Hassan. 20 July 2012
In recent years, security has become a challenge in grid computing networks. Anti-virus software, firewalls and intrusion detection systems are not enough to prevent sophisticated attacks fabricated by multiple users. Grid computing networks are often composed of different administrative domains owned by different organizations. Each domain can have its own security policy and may not want to share its security data with less protected networks. It is therefore more complex to ensure the security of such networks and to protect them from cross-domain attacks. The main difficulty is dealing with the distinctive nature of grid infrastructure: multi-site networks, multiple administrative domains, dynamic collaboration between nodes and sites, a high number of nodes to manage, no clear view of the external networks, and the exchange of security information among different administrative domains.

To handle these issues, this thesis proposes a Security Event Manager (SEM) called the Grid Security Operation Center (GSOC). GSOC assists IT security managers by giving a view of the security of the whole grid network without compromising the confidentiality of security data. To do so, GSOC provides a security evaluation of each administrative domain (AD) depending on the number of security alerts reported, placing it in one of three security levels: level 1 is the most secure, level 2 is intermediate and level 3 is the least secure. This classification helps to identify the ADs that are under attack or at high risk of being attacked in the future. A two-step, time-based correlation mechanism is proposed, which reduces the number of security alerts while continuing to detect attacks under intense distributed attacks. A parametric security-alert sharing scheme is also introduced: security alerts can be shared at any time between the members of the grid computing network, informing the participating members of ongoing attacks on other ADs' premises without interfering with their security policies. This security alert sharing concept had been discussed in the past but never implemented; GSOC is the first state-of-the-art implementation of the idea. Alert sharing helps to block the propagation of cross-domain attacks in grid computing networks.
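The level classification can be pictured as counting recent alerts per administrative domain inside a correlation window; the thresholds and window length below are illustrative assumptions, not GSOC's actual parameters.

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 300          # correlate alerts over a sliding 5-minute window
LEVEL_THRESHOLDS = (10, 50)   # illustrative cut-offs between levels 1/2 and 2/3

alerts = defaultdict(list)    # administrative domain -> alert timestamps

def report_alert(domain, timestamp=None):
    alerts[domain].append(timestamp if timestamp is not None else time.time())

def security_level(domain, now=None):
    """Level 1 = most secure, level 3 = least secure, by recent alert volume."""
    now = now if now is not None else time.time()
    recent = [t for t in alerts[domain] if now - t <= WINDOW_SECONDS]
    alerts[domain] = recent                  # drop alerts outside the window
    if len(recent) < LEVEL_THRESHOLDS[0]:
        return 1
    if len(recent) < LEVEL_THRESHOLDS[1]:
        return 2
    return 3

for _ in range(25):
    report_alert("domain-A")
report_alert("domain-B")
print(security_level("domain-A"), security_level("domain-B"))   # 2 1
```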

Towards ensuring scalability, interoperability and efficient access control in a triple-domain grid-based environment

Nureni Ayofe, Azeez. January 2012
Philosophiae Doctor - PhD

The high rate of grid computing adoption, both in academia and industry, has posed challenges regarding efficient access control, interoperability and scalability. Although several methods have been proposed to address these grid computing challenges, none has proven to be completely efficient and dependable. To tackle them, a novel access control architecture framework, a triple-domain grid-based environment modelled on role-based access control, was developed. The framework assumes three domains, each with an independent Local Security Monitoring Unit, plus a Central Security Monitoring Unit that monitors security for the entire grid. The architecture was evaluated and implemented using the G3S grid security services simulator, a meta-query language for the "cross-domain" queries, and Java Runtime Environment 1.7.0.5 for implementing the workflows that define the model's tasks. The simulation results show that the developed architecture is reliable and efficient when measured against the observed parameters and entities. The proposed access control framework also proved to be interoperable and scalable within the parameters tested.
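At its core, role-based access control reduces to two mappings, users-to-roles and roles-to-permissions, checked per domain. A minimal sketch follows (role names, domains and permissions are invented for illustration and are not the thesis's actual policy set):

```python
# Role-based access control in miniature: users get roles, roles get permissions,
# and each request is checked against the role the user holds in that domain.
ROLE_PERMISSIONS = {
    "researcher": {"read_dataset", "submit_job"},
    "admin":      {"read_dataset", "submit_job", "manage_nodes"},
}

USER_ROLES = {
    # user -> {domain -> role}; three domains as in the triple-domain model
    "alice": {"domain1": "admin", "domain2": "researcher"},
    "bob":   {"domain3": "researcher"},
}

def check_access(user, domain, permission):
    """Local check a per-domain security monitoring unit might perform."""
    role = USER_ROLES.get(user, {}).get(domain)
    return role is not None and permission in ROLE_PERMISSIONS.get(role, set())

print(check_access("alice", "domain1", "manage_nodes"))  # True
print(check_access("alice", "domain3", "submit_job"))    # False: no role there
```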

Création d'un environnement de gestion de base de données "en grille" : application à l'échange de données médicales / Creating a "grid" database management environment : application to medical data exchange

De Vlieger, Paul. 12 July 2011
The exchange of nominative medical data is a growing challenge involving numerous technical, legal and organizational barriers. New technologies, particularly from the field of grid computing, offer a fresh approach to medical data exchange. The development of grid middleware, notably the gLite middleware from the European EGEE project, has opened new perspectives for distributed data access and database federation. The main requirements of a medical data exchange system, besides a high level of security, come from the way data are collected and accessed. The traditional client-server model of collecting, moving, concentrating and managing data runs into many problems of ownership, control, updates, availability and scalability. The method described in this dissertation takes a different approach to accessing information: using the access control and security layer of computing grids, coupled with robust user authentication, it builds a dedicated grid network able to federate distributed medical databases and provide decentralized access to medical data. The main advantage is that data providers keep control over the information they produce and are freed from central management of the medical data, since the system can fetch the data directly at its source.

This approach is not completely transparent, however: all the mechanisms for patient identification and identity matching (data linkage) must be entirely rethought and rewritten to be compatible with a distributed database management system, and data linkage is an open problem even in centralized medical systems. A new method is therefore proposed to handle these specific issues in a highly distributed environment. The Sentinelle project (RSCA, Réseau Sentinelle Cancer Auvergne, www.e-sentinelle.org) constitutes the applicative framework of this work, in the field of organized breast and colon cancer screening in the French Auvergne region. Its first objective is to allow anatomic pathology reports to be exchanged between laboratories and screening structures in compliance with pathologists' requirements and the applicable legislation; its second goal is to provide epidemiologists with a framework for accessing high-quality medical data for statistical studies and global epidemiology.
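Patient identity matching (data linkage) is the crux noted above. As a deliberately simplified sketch, assuming deterministic linkage on a few normalized identifying fields (real linkage in such a project would have to be far more careful and privacy-preserving):

```python
import unicodedata

def normalize(text):
    """Crude normalization so 'Éloïse' and 'eloise' compare equal."""
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
    return text.strip().lower()

def linkage_key(record):
    # Deterministic linkage on normalized name, birth date and postal code.
    return (normalize(record["last_name"]),
            normalize(record["first_name"]),
            record["birth_date"],
            record["postal_code"])

def link(records_a, records_b):
    """Pair up records from two independent sources that refer to the same patient."""
    index = {linkage_key(r): r for r in records_a}
    return [(index[linkage_key(r)], r) for r in records_b if linkage_key(r) in index]

lab = [{"last_name": "Dupont", "first_name": "Éloïse",
        "birth_date": "1970-03-12", "postal_code": "63000", "report": "path-0042"}]
screening = [{"last_name": "DUPONT", "first_name": "Eloise",
              "birth_date": "1970-03-12", "postal_code": "63000", "invitation": "scr-1234"}]
print(link(lab, screening))
```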
