
Grid computing e cloud computing: análise dos impactos sociais, ambientais e econômicos da colaboração por meio do compartilhamento de recursos computacionais / Grid Computing and Cloud Computing: analysis of the social, environmental and economic impacts of collaboration through resource sharing

Silva, Diogo Cortiz da, 01 October 2009
This dissertation discusses the worldwide surplus of computational resources with idle processing capacity, and how the concepts of sharing and collaboration can drive the integration of those devices into an economical environment with high processing capacity. Today a great number of personal computers, servers and other devices exhibit high levels of idleness, while they could be put to other uses: many scientific projects, collaborative efforts and digital inclusion programs lack the resources to reach their objectives. Grid Computing was conceived as an alternative for integrating geographically distributed resources belonging to different administrative domains, enabling a decentralized computational environment. The main objective of this research is to analyze how this technology can generate benefits in the social, environmental and economic contexts. In the social sphere, Grid Computing stimulates collaboration and the sharing of computational resources and applications, and provides features that support data transparency across domains; these characteristics are also important for scientific inclusion. The first case study examines the importance of Grid Computing for the collaborative work of the Large Hadron Collider (LHC) project, which allowed research institutions and universities around the world to build a shared, large-scale computational environment for processing the data generated by the LHC. In the environmental context, the technology can make computational resources more energy efficient by raising the utilization of their capacity. The second case study analyzes data on the number of personal computers connected to the Internet and shows how Grid Computing, in the Volunteer Computing model, can make those computers more productive with no significant impact on energy consumption. The research also highlights the synergy between Grid Computing and Cloud Computing, its financial advantages, and the new business models based on selling platforms and software as services over the Internet. The third case study analyzes a Cloud Computing model that delivers computational resources (such as an entire server) as a service, enabling companies and individuals to contract a quickly provisioned computational environment without purchasing equipment or investing in implementation projects. Finally, both technologies stand out as relevant trends for the coming years, likely to shape new software, platform and service models centered on the Internet.

Middleware for distributed applications on dynamic networks (Intergiciels pour applications distribuées sur réseaux dynamiques)

Mahéo, Yves, 21 April 2011
The target networks of distributed applications have evolved significantly in recent years, showing increasing dynamism. A first characteristic of dynamic networks is volatility: some machines in the network may stop participating in the application, temporarily or permanently. Another characteristic appeared with the advent of mobile computing: in a context where machines are mobile and communicate by radio, the limited transmission range induces frequent changes in network topology. Our work concerns two categories of dynamic networks. First, we focused on Grid Computing applications, and more particularly on parallel applications targeting non-dedicated clusters, that is, sets of heterogeneous commodity workstations linked by equally commodity interconnection networks, thus offering variable performance. Second, we considered the target networks of ambient computing. In particular, we studied disconnected mobile ad hoc networks, that is, networks formed spontaneously from mobile machines communicating directly with each other by radio, without any fixed infrastructure, and whose topology is such that they do not form a single connected component but rather a set of distinct communication islands. To ease the development and operation of distributed applications on dynamic networks, it is useful to rely on high-level programming paradigms such as those promoted by the component-oriented and service-oriented approaches. These approaches notably allow a decoupling between application entities, easing the management of the complexity of developing and deploying applications in a dynamic environment. Most component and service technologies were designed for stable networks and are generally unsuitable for applications on dynamic networks. Our work aimed to facilitate the use of components and services in a dynamic context. We focused mainly on runtime support for applications built from components and services, this support taking the form of middleware, that is, a set of software services built on top of operating systems and communication protocols and invoked by the application's components. Our contributions are presented through three main projects: the Concerto project, on the definition of a parallel component model and an associated middleware for applications deployed on clusters of commodity workstations; the Cubik project, which extends the Fractal component model and provides support for deploying and executing ubiquitous components on dynamic networks; and the Sarah project, which builds a service platform on top of a communication protocol suited to disconnected mobile ad hoc networks.

Contribution to the management of large scale platforms: the Diet experience

Caron, Eddy, 06 October 2010
10 years. 10 years of research on high-performance computing in distributed environments, and, throughout those years, the development of a middleware called DIET as the glue binding this research together. Today the birth of a start-up around DIET gives this middleware another life, and this turning point is an opportunity to offer what I hope is a complete view of the adventure. Through the DIET experience, it is interesting to discuss the research problems inherent in the full development of a grid and cloud middleware for high-performance computing. Interoperability aspects are first discussed through the GridRPC standardization efforts, and we see how DIET addresses the problem of resource localization. The scalability problem is then treated through the proposed architecture. We then present our answer to service discovery, which started from a need of our middleware and led to a generic solution. These first works focus on the client side; on the server side, we discuss the solution implemented for resource management. The next step concerns deployment and the planning of that deployment. In line with our initial goal of providing a complete tool, we then address the problems of data management. We then highlight one of DIET's strong points, namely its answer to scheduling problems in heterogeneous environments, which leads us to workflow management in our middleware. Finally, to conclude, I present various use cases of DIET on real and varied applications, including the platform of the Décrypthon project, which uses our middleware in a production setting.

Nomadic migration : a service environment for autonomic computing on the Grid

Lanfermann, Gerd, January 2002
In recent years, there has been a dramatic increase in available compute capacity. However, these "Grid resources" are rarely accessible as a continuous stream; rather, they are scattered across various machine types, platforms and operating systems, coupled by networks of fluctuating bandwidth. It is becoming increasingly difficult for scientists to exploit the available resources for their applications. We believe that intelligent, self-governing applications should be able to select resources in a dynamic and heterogeneous environment: migrating applications determine a new resource when old capacities are used up; spawning simulations launch algorithms on external machines to speed up the main execution; applications are restarted as soon as a failure is detected. All these actions can be taken without human interaction.

A distributed compute environment possesses an intrinsic unreliability. Any application that interacts with such an environment must be able to cope with its failing components: deteriorating networks, crashing machines, failing software. We construct a reliable service infrastructure by endowing a service environment with a peer-to-peer topology. This "Grid Peer Services" infrastructure accommodates high-level services such as migration and spawning, as well as fundamental services for application launching, file transfer and resource selection. It utilizes existing Grid technology wherever possible to accomplish its tasks. An Application Information Server acts as a generic information registry for all participants in the service environment.

The service environment we developed allows applications, for example, to send a relocation request to a migration server. The server selects a new computer based on the transmitted resource requirements, transfers the application's checkpoint and binary to the new host, and resumes the simulation. Although the Grid's underlying resource substrate is not continuous, we achieve persistent computations on Grids by relocating the application. We show with real-world examples that a traditional genome analysis program can easily be modified to perform self-determined migrations in this service environment.
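By way of illustration, the relocation workflow just described (request, resource selection, checkpoint transfer, restart) might look like the following minimal sketch. Every name here (MigrationServer, select_host, the scp/ssh placeholders) is invented for illustration and is not the actual Grid Peer Services API:

    import subprocess
    from dataclasses import dataclass

    @dataclass
    class Host:
        name: str
        free_cpus: int
        free_mem_gb: float

    class MigrationServer:
        """Toy stand-in for the migration service described above."""

        def __init__(self, hosts):
            self.hosts = hosts

        def select_host(self, need_cpus, need_mem_gb):
            # Pick the first host satisfying the transmitted requirements.
            for h in self.hosts:
                if h.free_cpus >= need_cpus and h.free_mem_gb >= need_mem_gb:
                    return h
            raise RuntimeError("no suitable resource found")

        def relocate(self, checkpoint, binary, need_cpus, need_mem_gb):
            target = self.select_host(need_cpus, need_mem_gb)
            # scp/ssh stand in for the file-transfer and launch services.
            subprocess.run(["scp", checkpoint, binary,
                            f"{target.name}:/tmp/"], check=True)
            subprocess.run(["ssh", target.name, "/tmp/app",
                            "--restart-from", "/tmp/ckpt"], check=True)
            return target

    server = MigrationServer([Host("node-a", 8, 16.0), Host("node-b", 32, 64.0)])
    # server.relocate("/tmp/ckpt", "/tmp/app", need_cpus=16, need_mem_gb=32.0)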

Resource management for data streaming applications

Agarwalla, Bikash Kumar, 07 July 2010
This dissertation investigates novel middleware mechanisms for building streaming applications. Developing streaming applications is a challenging task because (i) they are continuous in nature; (ii) they require fusion of data coming from multiple sources to derive higher level information; (iii) they require efficient transport of data from/to distributed sources and sinks; (iv) they need access to heterogeneous resources spanning sensor networks and high performance computing; and (v) they are time critical in nature. My thesis is that an intuitive programming abstraction will make it easier to build dynamic, distributed, and ubiquitous data streaming applications. Moreover, such an abstraction will enable an efficient allocation of shared and heterogeneous computational resources thereby making it easier for domain experts to build these applications. In support of the thesis, I present a novel programming abstraction, called DFuse, that makes it easier to develop these applications. A domain expert only needs to specify the input and output connections to fusion channels, and the fusion functions. The subsystems developed in this dissertation take care of instantiating the application, allocating resources for the application (via the scheduling heuristic developed in this dissertation) and dynamically managing the resources (via the dynamic scheduling algorithm presented in this dissertation). Through extensive performance evaluation, I demonstrate that the resources are allocated efficiently to optimize the throughput and latency constraints of an application.
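As a rough sketch of the fusion-channel idea, the following toy code shows the shape of such an abstraction: the user wires input streams to a channel and supplies the fusion function. The names FusionChannel and fuse are invented here and are not the actual DFuse API:

    from typing import Callable, Iterable

    class FusionChannel:
        """Hypothetical fusion channel: pulls one item from each input
        stream, applies the user-supplied fusion function, emits the result."""

        def __init__(self, inputs: Iterable, fuse: Callable):
            self.inputs = list(inputs)
            self.fuse = fuse

        def run(self):
            for items in zip(*self.inputs):   # one item per data source
                yield self.fuse(items)        # derive higher-level information

    # Two simulated sensor streams; the fusion function averages readings.
    temps_a = iter([20.1, 20.4, 20.9])
    temps_b = iter([19.8, 20.2, 21.1])
    channel = FusionChannel([temps_a, temps_b], fuse=lambda xs: sum(xs) / len(xs))
    for fused in channel.run():
        print(fused)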

Performance Modeling Based Scheduling And Rescheduling Of Parallel Applications On Computational Grids

Sanjay, H A
As computational grids have become popular and ubiquitous, users have access to a large number of geographically distributed grid resources of different types. Many computational grid frameworks are composed of multiple distributed sites, each consisting of one or more dedicated or non-dedicated clusters. Jobs submitted to a grid are handled by a metascheduler, which interacts with the local schedulers of the clusters to schedule jobs to individual clusters. Computational grids have proven to be powerful test-beds for executing various kinds of parallel applications. When a parallel application is submitted to a grid, the metascheduler has to choose a set of resources from a cluster for the execution. To select the best set of resources, it is important to estimate the performance of the application: accurate performance estimates are essential for a grid metascheduler to schedule user jobs efficiently. Thus, models that predict the execution times of parallel applications on a set of resources, together with a search procedure (scheduling strategy) that selects the best set of machines within a cluster, are important for enabling parallel applications on grids. For the efficient execution of large scientific parallel applications consisting of multiple phases, performance models of the individual phases should be obtained, and efficient rescheduling strategies that use the per-phase models to adapt the applications to application and resource dynamics are necessary for maintaining high performance on grids. A practical and robust grid computing infrastructure that integrates components for application and resource monitoring, performance modeling, and scheduling and rescheduling techniques is essential for the large-scale deployment and high performance of scientific applications on grid systems.

This thesis focuses on developing performance models for predicting the execution times of parallel problems and subproblems on dedicated and non-dedicated grid resources. It also constructs robust scheduling and rescheduling strategies in a grid metascheduler that use these performance models for the efficient execution of large scientific parallel applications on dynamic grids. Finally, it builds a practical and robust grid middleware infrastructure that integrates performance modeling, scheduling and rescheduling, monitoring and migration frameworks for the large-scale deployment of high-performance applications on grids. The thesis consists of four main components.

In the first part, we developed a comprehensive set of performance modeling strategies to predict the execution times of tightly-coupled parallel applications on a set of resources in a dedicated or non-dedicated cluster. The main purpose of these prediction strategies is to aid grid metaschedulers in making scheduling decisions. The strategies, based on linear regression, can deal with non-dedicated systems where loads change during application execution. The models do not require detailed knowledge or instrumentation of the applications and can be constructed without involving application developers; they are intended for the rapid, large-scale deployment of parallel applications on non-dedicated grid systems. We evaluated the strategies on 8, 16, 24 and 32-node clusters with random loads and with load traces from a grid system. The strategies gave less than 30% average prediction error in all cases, which is reasonable for non-dedicated systems, and we found that scheduling based on their predictions results in perfect scheduling in many cases. For modeling large-scale scientific applications, we use execution profiles, automatic program analysis and manual analysis of significant portions of the application's code to identify the different phases of an application, and then apply our performance modeling strategies to each phase. Our experiments show that combining per-phase performance models gives 18%-70% more accurate predictions than using a single performance model for the whole application.

In the second part, we devised, evaluated and compared algorithms for scheduling tightly-coupled parallel applications on multi-cluster grids. The algorithms use the performance models to evaluate candidate schedules. We propose a novel algorithm, called Box Elimination (BE), that searches a space of performance model parameters to determine efficient schedules. By eliminating large regions of the search space containing poorer solutions at each step, the algorithm generates efficient schedules within a few seconds, even for clusters of 512 processors. Through a large number of real and simulated experiments, we compared the algorithm with popular optimization techniques and show that it generates up to 80% more efficient schedules, with execution times that are more robust against performance modeling errors.

The third part deals with policies for rescheduling long-running, multi-phase parallel applications in response to application and resource dynamics. We use our performance modeling and scheduling strategies to derive rescheduling plans for executing multi-phase parallel applications on grids. A rescheduling plan consists of potential points in the application execution at which to reschedule, together with schedules of resources for execution between two consecutive rescheduling points. We developed three algorithms for deriving a rescheduling plan: an incremental algorithm, a divide-and-conquer algorithm and a genetic algorithm. We also developed an algorithm that merges rescheduling plans derived on different clusters into a single coherent plan for application execution on a grid consisting of multiple clusters. The plans generated by our algorithms are highly efficient, leading to execution times within 10% of those obtained by a brute-force method, and rescheduling with these plans in response to changing application and resource dynamics gives much lower execution times than running an application on a single schedule throughout its execution.

In the final part, we developed a practical grid middleware framework called MerITA (Middleware for Performance Improvement of Tightly-Coupled Parallel Applications on Grids), a system for the effective execution of tightly-coupled parallel applications on multi-cluster grids consisting of dedicated or non-dedicated, interactive or batch systems. The framework brings together performance modeling for automatically determining application characteristics, scheduling strategies that use the performance models for the efficient mapping of applications to resources, rescheduling policies for determining when an executing application should be moved to a different set of resources to obtain a performance improvement, and a checkpointing library to enable rescheduling.
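The regression-based prediction idea can be sketched as follows. The choice of features here (problem size and background load) is an assumption made for illustration and does not reproduce the thesis's actual model parameters:

    import numpy as np

    # Observed runs: (problem_size, background_load) -> runtime in seconds.
    X = np.array([[1e6, 0.1], [1e6, 0.6], [4e6, 0.2], [8e6, 0.5], [8e6, 0.9]])
    y = np.array([12.0, 19.5, 44.0, 97.0, 140.0])

    # Fit runtime ~ b0 + b1*size + b2*load by ordinary least squares.
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    def predict_runtime(size, load):
        """Predicted execution time, used to rank candidate schedules."""
        return coef @ np.array([1.0, size, load])

    print(predict_runtime(4e6, 0.5))  # estimate for an unseen configuration

A metascheduler in this style would call the fitted model once per candidate resource set and pick the set with the lowest predicted runtime.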

Business and the grid: economic and transparent utilization of virtual resources

Weishäupl, Thomas, January 2006
Doctoral dissertation, University of Vienna, 2006.

High-performance hydraulic simulations on the Grid with Java and ProActive (Simulations hydrauliques de haute performance dans la Grille avec Java et ProActive)

Peretti-Pezzi, Guilherme, 15 December 2011
Optimizing water distribution is a crucial challenge that has already been targeted by many modeling tools. Useful models, implemented decades ago, need to evolve towards more recent formalisms and computing environments. This thesis presents the redesign of an old hydraulic simulation software package (IRMA), written in FORTRAN, which has been used for more than 30 years by the Société du Canal de Provence to design and maintain water distribution networks. IRMA was developed mainly for irrigation networks, using Clément's probabilistic demand-estimation model, and it now manages more than 6,000 km of pressurized water networks. The growing complexity and size of the networks highlight the need to modernize IRMA and rewrite it in a more current language (Java). The thesis presents the simulation model implemented in IRMA, including the head-loss equations, the linearization methods, the topology-analysis algorithms, the modeling of equipment and the construction of the linear system. Some new simulation types are presented: peak demand with probabilistic consumption estimation (Clément's flow), pump sizing (indexed characteristics), pipe-diameter optimization, and pressure-dependent consumption. The new solution adopted for solving the linear system is described, together with a comparison against existing Java solvers. The results are validated first by comparing the old FORTRAN version with the new solution for all networks maintained by the Société du Canal de Provence, and second by comparing results obtained with a standard, well-known simulation tool (EPANET). Regarding the performance of the new solution, sequential timing measurements are presented and compared with the old FORTRAN version. Finally, two use cases demonstrate the ability to run distributed simulations on a grid infrastructure using the ProActive solution. The new solution has already been deployed in a production environment and clearly demonstrates its efficiency, with a significant reduction in computation time, improved result quality, and easier integration into the information system of the Société du Canal de Provence, notably its spatial database.
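For reference, the probabilistic peak-demand model mentioned above is commonly stated as Clément's first formula (standard textbook form, not quoted from the thesis). With d_i the discharge of outlet i, p_i the probability that outlet i is open, and U the standard-normal quantile for the chosen operating quality, the design flow is:

    Q = \sum_i p_i \, d_i + U \sqrt{\sum_i p_i (1 - p_i) \, d_i^{2}}

The first term is the expected demand; the square-root term adds a safety margin that grows with the variance of the randomly operating outlets.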

Efficient large electromagnetic simulation based on hybrid TLM and modal approach on grid computing and supercomputer / Parallélisation, déploiement et adaptation automatique de la simulation électromagnétique sur une grille de calcul

Alexandru, Mihai, 14 December 2012
In the context of Information and Communications Technology (ICT), a major challenge is to create ever-smaller systems embedding more and more intelligence, in hardware and software, including increasingly complex communicating architectures. This requires robust design methodologies to shorten the development cycle and the prototyping phase, so the design and optimization of the physical communication layer is paramount. The complexity of these systems makes them difficult to optimize, because of the explosion in the number of unknown parameters; the methods and tools developed in past years will eventually be inadequate for the problems that lie ahead. Communicating objects will very often be integrated into cluttered environments containing all kinds of metallic structures and dielectrics, larger or smaller in size than the wavelength. The designer must anticipate the presence of such obstacles in the propagation channel to establish correct link budgets and an optimal design of the communicating object. For example, wave propagation in an airplane cabin, from sensors or an antenna towards the cockpit, is greatly affected by the presence of the metallic structure of the seats inside the cabin, or even of the passengers; this perturbation must be taken into account to predict the power balance between the antenna and a possible receiver correctly. This work addresses theoretical and computational electromagnetics in order to propose software tools for the rigorous calculation of electromagnetic scattering inside very large structures, or of antennas radiating near oversized objects. This calculation involves the numerical solution of very large systems that are inaccessible to traditional resources; the solution is based on grid computing and supercomputers. The aim of this work is the electromagnetic modeling of oversized structures by means of different numerical methods, using new hardware and software resources to carry out high-performance computations. The numerical modeling is based on a hybrid approach that combines the Transmission-Line Matrix (TLM) method and the mode-matching approach: the former is applied to homogeneous volumes, while the latter describes complex planar structures. To accelerate the simulation, a parallel implementation of the TLM algorithm in the distributed-computing paradigm is proposed. The subdomain of the structure discretized with TLM is divided into several parts, called tasks, each computed in parallel by a different processor; the tasks communicate with each other during the simulation through a message-passing library. An extension of the modal approach to several different modes was developed to handle planar structures of increasing complexity. The results demonstrate the benefits of combining grid computing with the hybrid approach to solve electrically large structures, by matching the size of the problem to the number of computing resources used. The study highlights the role of the parallelization scheme, cluster versus grid, with respect to the size of the problem and its partitioning. Moreover, a model for predicting computing performance on the grid was developed, based on a hybrid approach that combines history-based prediction with prediction derived from the application profile; the predicted values are in good agreement with the measured ones. The analysis of the simulation performance made it possible to extract practical rules for estimating the resources required for a given problem. Using all these tools, the propagation of the electromagnetic field inside a complex oversized structure, such as an airplane cabin, was computed on the grid and also on a supercomputer, and the advantages and disadvantages of the two environments are discussed.
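The task decomposition described above can be pictured with a toy halo-exchange loop. This sketch assumes mpi4py is available and uses a 1-D split with a placeholder smoothing update standing in for the real 3-D TLM scatter/connect steps:

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Each task owns a slab of the mesh plus one ghost cell on each side.
    local = np.zeros(102)          # 100 interior cells + 2 ghost cells
    local[1:-1] = rank             # dummy initial state

    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    for step in range(10):
        # Exchange boundary values with neighbours before each update.
        comm.Sendrecv(local[1:2], dest=left, recvbuf=local[-1:], source=right)
        comm.Sendrecv(local[-2:-1], dest=right, recvbuf=local[:1], source=left)
        # Placeholder update: a real code applies the TLM scatter/connect here.
        local[1:-1] = 0.5 * (local[:-2] + local[2:])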

ProGrid: an infrastructure to support parallel programming on computational grids (ProGrid: uma infra-estrutura de suporte a programação paralela em grades computacionais)

Costa, Paulo Vicente Capellotto, 26 May 2003
The computational grid concept allows resource sharing on a large scale. This work introduces the ProGrid system, an architecture for computational grids in which the communication infrastructure and resource management are used transparently by the applications. Unlike other grid approaches, this work relies on proxy servers to perform the additional communication and authentication procedures on behalf of client applications. The purpose of this mechanism is to enable parallel applications to execute in geographically distributed environments interconnected by an open communication network, such as the Internet, while meeting the security requirements desirable for computational grids. To reach these objectives, a generic architecture was developed for ProGrid, divided into a set of service layers; this work focused on the implementation of the layers responsible for secure communication and for the controlled sharing of the available resources. (Work supported by Financiadora de Estudos e Projetos.)
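The proxy mechanism can be pictured with a minimal relay sketch. This is pure illustration (the real ProGrid proxy also performs authentication and resource brokering, which are not shown), and the endpoints are hypothetical:

    import socket
    import threading

    def pipe(src, dst):
        """Relay bytes one way until the connection closes."""
        while chunk := src.recv(4096):
            dst.sendall(chunk)
        dst.close()

    def serve_proxy(listen_port, backend_host, backend_port):
        # Accept a client and relay its traffic to the backend on its
        # behalf, so the client never talks to the resource directly.
        srv = socket.create_server(("", listen_port))
        while True:
            client, _ = srv.accept()
            backend = socket.create_connection((backend_host, backend_port))
            threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
            threading.Thread(target=pipe, args=(backend, client), daemon=True).start()

    # serve_proxy(8080, "cluster.internal", 9000)   # hypothetical endpoints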
