511

Χρονοπρογραμματισμός και δρομολόγηση σε δίκτυα πλέγματος και δίκτυα δεδομένων / Scheduling and routing in grid networks and data networks

Κόκκινος, Παναγιώτης 05 January 2011 (has links)
Τα δίκτυα πλέγματος (grid networks) αποτελούνται από ένα σύνολο ισχυρών υπολογιστικών, αποθηκευτικών και άλλων πόρων. Οι πόροι αυτοί είναι συνήθως γεωγραφικά αλλά και διοικητικά διασκορπισμένοι και συνδέονται με ένα δίκτυο δεδομένων. Τα δίκτυα πλέγματος το τελευταίο καιρό έχουν αποκτήσει μία δυναμική, η οποία εντάσσεται μέσα σε ένα γενικότερο πλαίσιο, αυτό της κατανεμημένης επεξεργασίας και αποθήκευσης δεδομένων. Επιστήμονες, ερευνητές αλλά και απλοί χρήστες χρησιμοποιούν από κοινού τους κατανεμημένους πόρους για την εκτέλεση διεργασιών ή τη χρήση εφαρμογών, για τις οποίες δεν μπορούν να χρησιμοποιήσουν τους τοπικά διαθέσιμους υπολογιστές τους λόγω των περιορισμένων δυνατοτήτων τους. Στην παρούσα διδακτορική διατριβή εξετάζουμε ζητήματα που σχετίζονται με το χρονοπρογραμματισμό (scheduling) των διεργασιών στους διαθέσιμους πόρους, καθώς και με τη δρομολόγηση (routing) των δεδομένων που οι διεργασίες χρειάζονται. Εξετάζουμε τα ζητήματα αυτά είτε χωριστά, είτε σε συνδυασμό, μελετώντας έτσι τις αλληλεπιδράσεις τους. Αρχικά, προτείνουμε ένα πλαίσιο παροχής ποιότητας υπηρεσιών στα δίκτυα πλέγματος, το οποίο μπορεί να εγγυηθεί σε ένα χρήστη μία μέγιστη χρονική καθυστέρηση εκτέλεσης των διεργασιών του. Με τον τρόπο αυτό, ένας χρήστης μπορεί να επιλέξει με απόλυτη βεβαιότητα εκείνον τον υπολογιστικό πόρο που μπορεί να εκτελέσει τη διεργασία του πριν τη λήξη της προθεσμίας της. Το προτεινόμενο πλαίσιο δεν στηρίζεται στην εκ των προτέρων δέσμευση των υπολογιστικών πόρων, αλλά στο ότι οι χρήστες μπορούν να αυτό-περιορίσουν το ρυθμό δημιουργίας διεργασιών τους, ο οποίος συμφωνείται ξεχωριστά με κάθε πόρο κατά τη διάρκεια μίας φάσης εγγραφής τους. Πραγματοποιούμε έναν αριθμό πειραμάτων προσομοίωσης που αποδεικνύουν ότι το προτεινόμενο πλαίσιο μπορεί πράγματι να παρέχει στους χρήστες εγγυημένο μέγιστο χρόνο καθυστέρησης εκτέλεσης των διεργασιών τους, ενώ με τις κατάλληλες επεκτάσεις το πλαίσιο μπορεί να χρησιμοποιηθεί ακόμα και όταν το φορτίο των διεργασιών δεν είναι εκ των προτέρων γνωστό. Στη συνέχεια εξετάζουμε το πρόβλημα της ``Συγκέντρωσης Δεδομένων'' (ΣΔ), που εμφανίζεται όταν μία διεργασία χρειάζεται περισσότερα του ενός τμήματα δεδομένων να μεταφερθούν σε έναν υπολογιστικό πόρο, πριν η διεργασία ξεκινήσει την εκτέλεσή της σε αυτόν. Μελετάμε τα υπό-προβλήματα της επιλογής των αντιγράφων των δεδομένων, του χρονοπρογραμματισμού της διεργασίας και της δρομολόγησης των δεδομένων της και προτείνουμε έναν αριθμό πλαισίων ``Συγκέντρωσης Δεδομένων''. Μερικά πλαίσια εξετάζουν μόνο τις υπολογιστικές ή μόνο τις επικοινωνιακές απαιτήσεις των διεργασιών, ενώ άλλα εξετάζουν και τα δύο είδη απαιτήσεων. Επιπλέον, προτείνονται πλαίσια ``Συγκέντρωσης Δεδομένων'' τα οποία βασίζονται στην κατασκευή ελαχίστων γεννητικών δέντρων(Minimum Spanning Tree - MST), με σκοπό τη μείωση της συμφόρησης στο δίκτυο δεδομένων, που εμφανίζεται κατά την ταυτόχρονη μεταφορά των δεδομένων μίας διεργασίας. Στα πειράματα προσομοίωσης μας αξιολογούμε τα προτεινόμενα πλαίσια και δείχνουμε ότι αν η διαδικασία της ``Συγκέντρωση Δεδομένων'' πραγματοποιηθεί σωστά, τότε η απόδοση του δικτύου πλέγματος, όσον αφορά τη χρήση των πόρων και την εκτέλεση των διεργασιών, μπορεί να βελτιωθεί. Επιπλέον, ερευνούμε την εφαρμογή τεχνικών σύνοψης της πληροφορίας των χαρακτηριστικών των πόρων στα δίκτυα πλέγματος. 
Προτείνουμε ένα σύνολο μεθόδων και τελεστών σύνοψης, προσπαθώντας να μειώσουμε τον όγκο των πληροφοριών πόρων που μεταφέρονται πάνω από το δίκτυο, ενώ παράλληλα επιθυμούμε οι συνοπτικές πληροφορίες που παράγονται να βοηθούν το χρονοπρογραμματιστή να παίρνει αποδοτικές αποφάσεις ανάθεσης διεργασιών στους διαθέσιμους πόρους. Οι τεχνικές αυτές μπορούν να συνδυαστούν και με τις αντίστοιχες τεχνικές που εφαρμόζονται στα ιεραρχικά δίκτυα δεδομένων για τη δρομολόγηση, εξασφαλίζοντας έτσι τη διαλειτουργικότητα μεταξύ διαφορετικών δικτύων πλέγματος καθώς και το απόρρητο των πληροφοριών που ανήκουν σε διαφορετικούς παρόχους πόρων. Στα πειράματα προσομοίωσης μας χρησιμοποιούμε σαν μετρική της ποιότητας / αποδοτικότητας των αποφάσεων του χρονοπρογραμματιστή τον Stretch Factor (SF), που ορίζεται ως ο λόγος της μέσης καθυστέρησης εκτέλεσης των διεργασιών όταν αυτές χρονοπρογραμματίζονται με βάση ακριβείς πληροφορίες πόρων, προς τη μέση καθυστέρηση τους όταν χρησιμοποιούνται συνοπτικές πληροφορίες. Ακόμα, μετράμε τη συχνότητα με την οποία ο χρονοπρογραμματιστής ενημερώνεται για τις αλλαγές στην κατάσταση των πόρων καθώς και τον όγκο των πληροφοριών πόρων που μεταφέρονται. Μελετάμε, ακόμα, ζητήματα που προκύπτουν από την υλοποίηση αλγορίθμων χρονοπρογραμματισμού που έχουν αρχικά μελετηθεί σε περιβάλλοντα προσομοίωσης, σε πραγματικά συστήματα ενδιάμεσου λογισμικού (middleware) για δίκτυα πλέγματος, όπως το gLite. Το πρώτο ζήτημα που εξετάζουμε είναι το γεγονός ότι οι πληροφορίες που παρέχονται στους αλγορίθμους χρονοπρογραμματισμού στα συστήματα αυτά δεν είναι πάντα έγκυρες, ενώ το δεύτερο ζήτημα είναι ότι δεν υπάρχει ευελιξία στο διαμοιρασμό των πόρων μεταξύ διαφορετικών διεργασιών. Η μελέτη μας δείχνει ότι με απλές αλλαγές στους μηχανισμούς διαχείρισης διεργασιών ενός συστήματος ενδιάμεσου λογισμικού, αυτά αλλά και άλλα ζητήματα μπορούν να αντιμετωπιστούν, επιτυγχάνοντας σημαντικές βελτιώσεις στην απόδοση των δικτύων πλέγματος. Στα πλαίσια αυτά μάλιστα, εξετάζουμε τη χρήση της τεχνολογίας της εικονικοποίησης (virtualization). Υλοποιούμε και αξιολογούμε τους προτεινόμενους μηχανισμούς σε ένα μικρό δοκιμαστικό δίκτυο πλέγματος. Τέλος, προτείνουμε έναν αλγόριθμο πολλαπλών κριτηρίων για τη δρομολόγηση και ανάθεση μήκους κύματος υπό την παρουσία φυσικών εξασθενήσεων (Impairment-Aware Routing and Wavelength Assignment, IA-RWA) για οπτικά δίκτυα δεδομένων. Τα οπτικά δίκτυα είναι η δικτυακή τεχνολογία που χρησιμοποιείται σήμερα για τη διασύνδεση των υπολογιστικών και αποθηκευτικών πόρων των δικτύων πλέγματος, ενώ οι διάφορες φυσικές εξασθενήσεις τείνουν να μειώνουν την ποιότητα μετάδοσης (Quality of Transmission - QoT) των οπτικών σημάτων. Κύριο χαρακτηριστικό του προτεινόμενου αλγορίθμου είναι ότι υπολογίζει την ποιότητα μετάδοσης (Quality of Transmission - QoT) ενός υποψήφιου οπτικού μονοπατιού (lightpath) μη βασιζόμενο σε πραγματικές μετρήσεις ή εκτιμήσεις μέσω αναλυτικών μοντέλων των διαφόρων φυσικών εξασθενήσεων, αλλά μετρώντας τις αιτίες στις οποίες αυτά οφείλονται. Με τον τρόπο αυτό ο αλγόριθμος γίνεται πιο γενικός και εφαρμόσιμος σε διαφορετικές συνθήκες (μέθοδοι διαμόρφωσης του οπτικού σήματος, ρυθμοί μετάδοσης, τιμές διαφόρων φυσικών παραμέτρων, κ.α.). Τα πειράματα προσομοίωσης μας δείχνουν ότι ο προτεινόμενος αλγόριθμος μπορεί να εξυπηρετήσει τις περισσότερες δυναμικές αιτήσεις σύνδεσης, υπολογίζοντας γρήγορα, μονοπάτια με καλή ποιότητα μετάδοσης σήματος. 
Γενικά, η παρούσα διδακτορική διατριβή παρουσιάζει έναν αριθμό σημαντικών και καινοτόμων μεθόδων, πλαισίων και αλγορίθμων που αφορούν τα δίκτυα πλέγματος. Παράλληλα ωστόσο αποκαλύπτει το εύρος των ζητημάτων και ως ένα βαθμό και τις αλληλεπιδράσεις τους, που σχετίζονται με την αποδοτική λειτουργία των δικτύων πλέγματος, τα οποία απαιτούν τη σύνθεση και τη συνεργασία ερευνητών, μηχανικών και επιστημόνων από διάφορα πεδία. / Grid networks consist of several high capacity, computational, storage and other resources, which are geographically distributed and may belong to different administrative domains. These resources are usually connected through high capacity optical networks. The grid networks evolution follows the current trend of distributedly performed computation and storage. This trend provides several new possibilities to scientists, researchers and to simple users around the world, so as to use the shared resources for executing their tasks and running their applications. These operations are not always possible to perform in local, limited capacity, resources. In this thesis we study issues related to the scheduling of tasks and the routing of their datasets. We study these issues both separately and jointly, along with their interactions. Initially, we present a Quality of Service (QoS) framework for grids that guarantees to users an upper bound on the execution delay of their submitted tasks. Such delay guarantees imply that a user can choose, with absolute certainty, a resource to execute a task before its deadline expires. Our framework is not based on the advance reservation of resources, instead, the users follow a self constrained task generation pattern, which is agreed separately with each resource during a registration phase. We validate experimentally the proposed Quality of Service (QoS) framework for grids, verifying that it satisfies the delay guarantees promised to users. In addition, when the proposed extensions are used, the framework also provides delay guarantees without exact a-priori knowledge of the task workloads. Next, we examine a task scheduling and data migration problem for grid networks, which we refer to as the Data Consolidation (DC) problem. Data Consolidation arises when a task requests concurrently multiple pieces of data, possibly scattered throughout the grid network that have to be present at a selected site before the task's execution starts. In such a case, the scheduler must select the data replicas to be used, the site where these data will be gathered for the task to be executed, and the routing paths to be followed. We propose and experimentally evaluate several Data Consolidation schemes. Some consider only the computational or only the communication requirements of the tasks, while others consider both kinds of requirements. We also propose Data Consolidation (DC) schemes, which are based on Minimum Spanning Trees (MST) that route concurrently the datasets so as to reduce the congestion that may appear in the future, due to these transfers. In our simulation experiments we validate the proposed schemes and show that if the Data Consolidation operation is performed efficiently, then significant benefits can be achieved, in terms of the resources' utilization and task delay. We also consider the use of resource information aggregation in grid networks. 
We propose a number of aggregation schemes and operators for reducing the information exchanged in a grid network and used by the resource manager in order to make efficient scheduling decisions. These schemes can be integrated with the schemes utilized in hierarchical data networks for data routing, providing interoperability between different grid networks, while the sensitive or detailed information of resource providers is kept private. We perform a large number of experiments to evaluate the proposed aggregation schemes and the operators used. As a metric of the quality of the aggregated information, we introduce the Stretch Factor (SF), defined as the ratio of the task delay when the task is scheduled using complete resource information over the task delay when an aggregation scheme is used. We also measure the number of resource information updates triggered by each aggregation scheme and the amount of resource information transferred. In addition, we are interested in the difficulties encountered and the solutions provided in order to develop and evaluate scheduling policies, initially implemented in a simulation environment, in the gLite grid middleware. We identify two important such implementation issues, namely the inaccuracy of the information provided to the scheduler by the information system, and the inflexibility in the sharing of a resource among different jobs. Our study indicates that simple changes in gLite's scheduling procedures can solve these and other similar issues, yielding significant performance gains. We also investigate the use of virtualization technology in the gLite middleware. We implement and evaluate the proposed mechanisms in a small gLite testbed. Finally, we propose a multicost impairment-aware routing and wavelength assignment (IA-RWA) algorithm for optical networks. In general, physical impairments tend to degrade the optical signal quality. Optical networks are also the main networking technology used today for interconnecting the grid's computational and storage resources around the world. The main characteristic of the proposed algorithm is that it calculates the quality of transmission (QoT) of a candidate lightpath by measuring several impairment-generating source parameters and not by using complex formulas to directly account for the effects of physical impairments. In this way, the approach is more generic and more easily applicable to different conditions (modulation formats, bit rates). Our results indicate that the proposed impairment-aware routing and wavelength assignment (IA-RWA) algorithm can efficiently serve the online traffic in an optical network and guarantee the transmission quality of the found lightpaths, with low running times. In general, in this thesis we present several novel mechanisms and algorithms for grid networks. At the same time, this thesis reveals the variety of issues that relate to the efficient operation of grid networks and their interdependencies. Handling all these issues requires the cooperation of researchers, scientists and engineers from various fields.
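For concreteness, here is a minimal sketch (in Python, with invented variable names) of how the Stretch Factor defined above could be computed from measured task delays; it is an illustration of the metric, not code from the thesis.

```python
# Hypothetical illustration of the Stretch Factor (SF) metric described above:
# the ratio of the mean task delay under scheduling with complete resource
# information to the mean task delay when aggregated information is used.
from statistics import mean

def stretch_factor(delays_complete_info, delays_aggregated_info):
    """SF close to 1.0 means aggregation barely degrades scheduling quality."""
    return mean(delays_complete_info) / mean(delays_aggregated_info)

# Example with made-up delays (seconds) for the same task set under both modes.
print(stretch_factor([12.0, 15.5, 9.8], [13.1, 16.0, 11.2]))  # ~0.93
```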
512

A hybrid peer-to-peer middleware plugin for an existing client/server massively multiplayer online game

Croucher, Darren Armstrong 04 1900 (has links)
Thesis (MEng)--Stellenbosch University, 2014. / ENGLISH ABSTRACT: Massively Multiplayer Online Games are large virtual worlds co-inhabited by players over the Internet. As up to thousands of players can be simultaneously connected to the game, the server and network architectures are required to scale efficiently. The traditional client/server model results in a heavy financial burden for operation of the server. Various alternative architectures have been proposed as a replacement for the traditional model, but the adoption of these alternatives is slow as they present their own set of challenges. The proposed hybrid system is based on many different architectures and peer-to-peer concepts that were reviewed in the literature. It aims to provide a compromise for existing, commercially successful MMOGs to introduce peer-to-peer components into their systems with no requirement of modification to their server or client software. With the system's design presented, the middleware software is implemented and deployed in a real, controlled environment alongside an Ultima Online game server and its clients. The movement game mechanic was distributed amongst the peers while the others remained the responsibility of the server. A number of performance experiments are performed to measure the effects of the modified system over the original client/server system on bandwidth, latency, and hardware impact. The results revealed an increase in the server bandwidth usage by 35%, slave bandwidth usage by 17% and supernode bandwidth usage by 3111%. The latencies of distributed server mechanics were reduced by up to 94%, while the non-distributed latencies were increased by up to 6000%. These results suggested that a system with absolutely no modification to the server is unlikely to provide the desired benefits. However, with two minor modifications to the server, the middleware is able to reduce both server load and player latencies. The server bandwidth can be reduced by 39%, while the supernode's bandwidth is increased only by 1296%. The distributed latencies maintain their reduction while non-distributed latencies remain unchanged from the C/S system. / AFRIKAANSE OPSOMMING: Massiewe Multispeler Aanlyn Speletjies (MMAS) is groot virtuele wêrelde op die Internet wat bewoon word deur spelers. Aangesien duisende spelers gelyktydig kan inskakel op die speletjie word daar verwag van die bediener en netwerk argitektuur om effektief te skaleer om die groot hoeveelhede spelers te kan hanteer. Die tradisionele kliënt/bediener model lei tot 'n groot finansiële las vir die operateur van die bediener. Verskeie alternatiewe argitekture is al voorgestel om die tradisionele model te vervang, maar die aanvaarding en in gebruik neem van hierdie alternatiewe (soos eweknie-netwerke) is 'n stadige proses met sy eie stel uitdagings. Die voorgestelde hibriede stelsel is gebaseer op baie verskillende argitektuur- en eweknie konsepte wat in die literatuur oorweeg is. Die doel is om 'n kompromie vir bestaande kommersieel suksesvolle MMASs te verskaf om eweknie komponente te implementeer sonder om die bediener- of kliënt sagteware aan te pas. Met hierdie stelsel se ontwerp word die middelware sagteware geïmplementeer en gebruik in 'n regte, dog gekontroleerde omgewing, tesame met 'n Ultima Online bediener en sy kliënte. Die beweging speletjie meganisme word versprei onder die eweknie netwerk en die ander meganismes bly die verantwoordelikheid van die bediener. 'n Aantal eksperimente is ingespan om die effek van die hibriede stelsel te meet op die oorspronklike kliënt/bediener stelsel, in terme van bandwydte, vertraging en impak op hardeware. Die resultate toon 'n toename van 35% in bediener-, 17% in slaaf-, en 3111% in supernodus bandwydte gebruik. Die vertraging van verspreide bediener meganismes neem af met tot 94%, terwyl onverspreide vertragings toeneem met tot 6000%. Hierdie resultate wys dat 'n stelsel wat geen aanpassing maak aan die bediener sagteware onwaarskynlik die gewenste voordele sal lewer. Deur egter 2 klein aanpassings toe te laat tot die bediener, is dit moontlik vir die hibriede stelsel om data las van die bediener en die speler se vertraging te verminder. Die bediener bandwydte kan met 39% verminder word, terwyl die supernodus bandwydte slegs met 1296% toeneem. Die verspreide vertragings handhaaf hul vermindering, terwyl die onverspreide vertragings onveranderd bly van die C/S stelsel.
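As a rough illustration of the dispatch idea described above (hypothetical message format and class names, not the actual Ultima Online protocol or the thesis middleware code), messages of the distributed mechanic go peer-to-peer while everything else stays on the client/server path:

```python
# Sketch: route packets of the distributed mechanic (movement) to peers,
# and relay all other mechanics to the original game server.

class Endpoint:
    def __init__(self, name):
        self.name = name
    def send(self, message):
        print(f"{self.name} <- {message}")

DISTRIBUTED_MECHANICS = {"movement"}   # assumption taken from the abstract

def dispatch(message, peers, server):
    """Deliver a game message peer-to-peer or via the server, by mechanic."""
    targets = peers if message["mechanic"] in DISTRIBUTED_MECHANICS else [server]
    for target in targets:
        target.send(message)

peers, server = [Endpoint("peer-1"), Endpoint("peer-2")], Endpoint("server")
dispatch({"mechanic": "movement", "x": 10, "y": 4}, peers, server)
dispatch({"mechanic": "combat", "target": 42}, peers, server)
```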
513

UMA ARQUITETURA PARA A UTILIZAÇÃO DE COMPUTAÇÃO NAS NUVENS NOS AMBIENTES DE COMPUTAÇÃO PERVASIVA / AN ARCHITECTURE FOR THE USE OF CLOUD COMPUTING IN PERVASIVE COMPUTING ENVIRONMENTS

Pereira, Henrique Gabriel Gularte 22 March 2012 (has links)
The modern world can be characterized by the quick proliferation of mobile devices and by the intense use of computers in our daily lives. Both pervasive computing and cloud computing have appeared as very promising trends, but for pervasive computing to reach the mainstream, many paradigm changes are needed in current computing environments. Some of the problems found in pervasive computing are not of a technical order, but are due to the lack of standards and models to allow devices to interoperate and to the difficulty of creating low-cost computing environments. Pervasive environments are marked by sudden and frequent changes, making it necessary to devise a way to manage context information. This work presents a solution that allows the creation of pervasive computing environments using resources available in the cloud computing paradigm, taking into consideration requirements such as the ability to mix heterogeneous computing devices while consuming the least possible amount of resources and using ontologies for context information representation and management. In this context, an architecture for the development of pervasive computing environments, a case study in a residential scenario and an analysis of the results obtained with the proposed architecture are presented. / O mundo atual é caracterizado pela rápida proliferação de dispositivos móveis e pelo intenso uso de computadores no nosso cotidiano. Tanto a computação pervasiva quanto a computação em nuvem têm surgido como uma tendência muito promissora. Porém, para que a computação pervasiva se consolide são necessárias algumas mudanças de paradigma nos ambientes atuais da computação. Boa parte dos problemas encontrados hoje em dia na computação pervasiva não são de ordem técnica, mas sim a falta de padrões e modelos para permitir a interoperabilidade entre os dispositivos e a criação de ambientes computacionais de baixo custo. Os ambientes de computação pervasiva são caracterizados por mudanças rápidas e frequentes, sendo necessária a existência de alguma maneira para gerenciar essa informação de contexto. Essa dissertação visa apresentar uma solução para permitir a criação de ambientes de computação pervasiva utilizando serviços disponíveis no paradigma da computação em nuvem levando em consideração requisitos como a capacidade de trabalhar com dispositivos computacionais heterogêneos consumindo o mínimo possível de recursos e utilizando ontologias para a representação de informação de contexto. Nesse contexto, são apresentadas uma proposta de arquitetura para ambientes pervasivos, um estudo de caso em um cenário residencial e apresentados resultados e conclusões sobre a arquitetura proposta. Os resultados alcançados no estudo de caso permitiram a implementação de um ambiente pervasivo utilizando recursos computacionais disponíveis na nuvem e atingindo os objetivos propostos no trabalho.
514

Support intergiciel pour la conception et le déploiement adaptatifs fiables, application aux bâtiments intelligents / Middleware support for adaptive reliable design and deployment, application to building automation

Sylla, Adja Ndeye 18 December 2017 (has links)
Dans le contexte de l’informatique pervasive et de l’internet des objets, les systèmes sont hétérogènes, distribués et adaptatifs (p. ex., systèmes de gestion des transports, bâtiments intelligents). La conception et le déploiement de ces systèmes sont rendus difficiles par leur nature hétérogène et distribuée mais aussi le risque de décisions d’adaptation conflictuelles et d’inconsistances à l’exécution. Les inconsistances sont causées par des pannes matérielles ou des erreurs de communication. Elles surviennent lorsque des actions correspondant aux décisions d’adaptation sont supposées être effectuées alors qu’elles ne le sont pas. Cette thèse propose un support intergiciel, appelé SICODAF, pour la conception et le déploiement de systèmes adaptatifs fiables. SICODAF combine une fiabilité comportementale (absence de décisions conflictuelles) au moyen de systèmes de transitions et une fiabilité d’exécution (absence d’inconsistances) à l’aide d’un intergiciel transactionnel. SICODAF est basé sur le calcul autonomique. Il permet de concevoir et de déployer un système adaptatif sous la forme d’une boucle autonomique qui est constituée d’une couche d’abstraction, d’un mécanisme d’exécution transactionnelle et d’un contrôleur. SICODAF supporte trois types de contrôleurs (basés sur des règles, sur la théorie du contrôle continu ou discret). Il permet également la reconfiguration d’une boucle, afin de gérer les changements d’objectifs qui surviennent dans le système considéré, et l’intégration d’un système de détection de pannes matérielles. Enfin, SICODAF permet la conception de boucles multiples pour des systèmes qui sont constitués de nombreuses entités ou qui requièrent des contrôleurs de types différents. Ces boucles peuvent être combinées en parallèle, coordonnées ou hiérarchiques. SICODAF a été mis en oeuvre à l’aide de l’intergiciel transactionnel LINC, de l’environnement d’abstraction PUTUTU et du langage Heptagon/BZR qui est basé sur des systèmes de transitions. SICODAF a été également évalué à l’aide de trois études de cas. / In the context of pervasive computing and internet of things, systems are heterogeneous, distributed and adaptive (e.g., transport management systems, building automation). The design and the deployment of these systems are made difficult by their heterogeneous and distributed nature but also by the risk of conflicting adaptation decisions and inconsistencies at runtime. Inconsistencies are caused by hardware failures or communication errors. They occur when actions corresponding to the adaptation decisions are assumed to be performed but are not done. This thesis proposes a middleware support, called SICODAF, for the design and the deployment of reliable adaptive systems. SICODAF combines a behavioral reliability (absence of conflicting decisions) by means of transition systems and an execution reliability (absence of inconsistencies) through a transactional middleware. SICODAF is based on autonomic computing. It allows to design and deploy an adaptive system in the form of an autonomic loop which consists of an abstraction layer, a transactional execution mechanism and a controller. SICODAF supports three types of controllers (based on rules, on continuous or discrete control theory). SICODAF also allows for loop reconfiguration, to deal with changing objectives in the considered system, and the integration of a hardware failure detection system. Finally, SICODAF allows for the design of multiple loops for systems that consist of a high number of entities or that require controllers of different types. These loops can be combined in parallel, coordinated or hierarchical. SICODAF was implemented using the transactional middleware LINC, the abstraction environment PUTUTU and the language Heptagon/BZR that is based on transition systems. SICODAF was also evaluated using three case studies.
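A minimal sketch of the kind of rule-based autonomic loop described above (invented sensor and actuator names; this is not SICODAF's actual API), pairing a rule controller with an all-or-nothing execution step so that a failed action cannot leave an inconsistent state:

```python
# Rule-based controller: map the observed context to adaptation actions.
def rule_controller(context):
    actions = []
    if context["presence"] and context["luminosity"] < 200:
        actions.append(("lamp", "on"))
    elif not context["presence"]:
        actions.append(("lamp", "off"))
    return actions

# Transactional execution: apply either all actions or none of them.
def execute_transactionally(actions, actuators):
    if not all(name in actuators for name, _ in actions):
        return False                     # abort, leaving the state untouched
    for name, command in actions:
        actuators[name] = command
    return True

actuators = {"lamp": "off"}
committed = execute_transactionally(
    rule_controller({"presence": True, "luminosity": 120}), actuators)
print(committed, actuators)              # True {'lamp': 'on'}
```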
515

MPI sobre MOM para suportar log de mensagens pessimista remoto / MPI over MOM to support remote pessimistic message logging

Machado, Caciano dos Santos January 2010 (has links)
O aumento crescente no número de processadores das arquiteturas paralelas que estão no topo dos rankings de desempenho, apesar de permitir uma maior capacidade de processamento, também traz consigo um aumento na taxa de falhas diretamente proporcional ao número de processadores. Atualmente, as técnicas de tolerância a falhas com recuperação retroativa são as mais empregadas em aplicações MPI, principalmente a técnica de checkpoint coordenado. No entanto, previsões afirmam que essa última técnica será inadequada para as arquiteturas emergentes. Em contrapartida, as técnicas de log de mensagens possuem características que as tornam mais apropriadas no novo cenário que se estabelece. O presente trabalho consiste em uma proposta de log de mensagens pessimista remoto com checkpoint não-coordenado e a avaliação de desempenho da comunicação MPI sobre Publish/Subscriber no qual se baseia o log de mensagens. O trabalho compreende: um estudo das técnicas de tolerância a falhas mais empregadas em ambientes de alto desempenho e a motivação para a escolha dessa variante de log de mensagens; a proposta de log de mensagens; uma implementação de comunicação Open MPI sobre OpenAMQ e sua respectiva avaliação de desempenho com comunicação tradicional TCP/IP e com o log de mensagens pessimista local da distribuição do Open MPI. Os benchmarks utilizados foram o NetPIPE, o NAS Parallel Benchmarks e a aplicação Virginia Hydrodynamics (VH-1). / The growing number of processors in parallel architectures at the top of performance rankings allows a higher processing capacity. However, it also brings an increase in the fault rate that is directly proportional to the number of processors. Nowadays, coordinated checkpointing is the most widely used rollback technique for system recovery in the occurrence of faults in MPI applications. Nevertheless, projections indicate that this technique will be inappropriate for the emerging architectures. On the other hand, message logging seems to be more appropriate to this new scenario. This work consists of a proposal of pessimistic remote message logging with non-coordinated checkpointing and the performance evaluation of an MPI communication mechanism that works over the Publish/Subscribe channels on which the proposed message logging is based. The work is organized as follows: a study of the fault-tolerance techniques used in HPC and the motivation for choosing this variant of message logging; the message logging proposal; an implementation of Open MPI communication over OpenAMQ; and a performance evaluation comparing it with traditional TCP/IP communication and with the local pessimistic message logging (sender-based) from the Open MPI distribution. The benchmark set is composed of NetPIPE, the NAS Parallel Benchmarks and the Virginia Hydrodynamics (VH-1) application.
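To make the logging variant concrete, here is a hedged sketch (invented interfaces, not the Open MPI/OpenAMQ implementation evaluated in the thesis): in remote pessimistic logging, a received message is persisted by a remote logger before being delivered to the application, so a failed process can later be replayed from the log.

```python
# Stand-in for a logger process reached over a publish/subscribe channel.
class RemoteLogger:
    def __init__(self):
        self.log = []
    def publish(self, record):
        self.log.append(record)   # assume the record is durable once stored
        return True               # returning acts as the acknowledgement

def deliver(payload, sender, seq, logger, application):
    """Pessimistic delivery: block until the message is logged remotely."""
    if not logger.publish({"sender": sender, "seq": seq, "payload": payload}):
        raise RuntimeError("message not logged; delivery withheld")
    application(payload)          # only now hand the message to the MPI rank

logger = RemoteLogger()
deliver(b"halo exchange", sender=3, seq=17, logger=logger, application=print)
print(len(logger.log))            # 1 -- the message is replayable after a fault
```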
516

A technology reference model for client/server software development

Nienaber, R. C. (Rita Charlotte) 06 1900 (has links)
In today's highly competitive global economy, information resources representing enterprise-wide information are essential to the survival of an organization. The development of and increase in the use of personal computers and data communication networks are supporting or, in many cases, replacing the traditional computer mainstay of corporations. The client/server model incorporates mainframe programming with desktop applications on personal computers. The aim of the research is to compile a technology model for the development of client/server software. A comprehensive overview of the individual components of the client/server system is given. The different methodologies, tools and techniques that can be used are reviewed, as well as client/server-specific design issues. The research is intended to create a road map in the form of a Technology Reference Model for Client/Server Software Development. / Computing / M. Sc. (Information Systems)
518

FLEXLAB : Middleware de virtualização de hardware para gerenciamento centralizado de computadores em rede / FLEXLAB: Hardware virtualization middleware for centralized management of networked computers

Cruz, Daniel Igarashi. January 2008 (has links)
Orientador: Marcos Antônio Cavenaghi / Banca: Renata Spolon Lobato / Banca: Ronaldo Lara Gonçalves / Resumo: O gerenciamento de um conglomerado de computadores em rede é uma atividade potencialmente complexa devido à natureza heterogênea destes equipamentos. Estas redes podem apresentar computadores com diferentes configurações em sua camada de software básico e aplicativos em função das diferenças de configuração de hardware em cada nó da rede. Neste cenário, cada computador torna-se uma entidade gerenciada individualmente, exigindo uma atividade manual de configuração da imagem de sistema ou com automatização limitada à camada de aplicativos. Tecnologias que oferecem gestão centralizada, como arquiteturas thin-client ou terminal de serviços, penalizam o desempenho das estações e oferecem capacidade reduzida para atender um número crescente de usuários uma vez que todo o processamento dos aplicativos dos clientes é executado em um único nó da rede. Outras arquiteturas para gerenciamento centralizado que atuam em camada de software são ineficazes em oferecer uma administração baseada em uma imagem única de configuração dado o forte acoplamento entre as camadas de software e hardware. Compreendendo as deficiências dos modelos tradicionais de gerenciamento centralizado de computadores, o objetivo deste trabalho é o desenvolvimento do FlexLab, mecanismo de gerenciamento de computadores através de Imagem de Sistema Única baseado em um middleware de virtualização distribuída. Por meio do middleware de virtualização do FlexLab, os computadores em rede de um ambiente são capazes de realizar o processo de boot remoto a partir de uma Imagem de Sistema Única desenvolvida sobre um hardware virtualizado. Esta imagem é hospedada e acessada a partir de um servidor central da rede, padronizando assim as configurações de software básico e aplicativos mesmo em um cenário de computadores com configuração heterogênea de hardware, simplificando... (Resumo completo, clicar acesso eletrônico abaixo) / Abstract: Computer network management is a potentially complex task due to the heterogeneous nature of the hardware configuration of these machines. These networks may contain computers with different configurations in their basic software layer due to the configuration differences in their hardware layer; in this scenario, each computer becomes an individually managed entity in the network, requiring an individual, manually operated configuration procedure or automated maintenance restricted to the application layer. Thin-client or terminal-services architectures do offer centralized management; however, they impose performance penalties on client execution and offer reduced scalability to serve a growing number of users, since all application processing is hosted on and consumes the processing power of a single network node: the server. On the other hand, architectures for centralized management based on applications running over the software layer are inefficient in offering management based on a single configuration image, due to the tight coupling between the software and hardware layers. Understanding the drawbacks of these centralized computer management solutions, the aim of this project is to develop FlexLab, a centralized computer management architecture that provides a Single System Image based on a distributed virtualization middleware. Through the FlexLab virtualization middleware, the computers of a network environment are able to boot remotely from a Single System Image targeting the virtual machine hardware. This Single System Image is hosted on a central network server, thus standardizing basic software and application configurations even for networks with heterogeneous computer hardware, which simplifies computer management since all computers may be managed through a Single System Image. The experiments have shown that... (Complete abstract click electronic access below) / Mestre
519

Personalização de programas de TV no contexto da TV digital portátil interativa / TV show personalization in the context of interactive portable digital TV

Gatto, Elaine Cecília 29 November 2010 (has links)
Interactive Digital Television allows several services to be offered to users, in order to provide entertainment, e-learning and new mechanisms for social inclusion. New broadcasters and TV shows may be created, which may gradually increase the amount of information to be displayed on the screen. As a result, users may experience discomfort and difficulties in finding the information that really matters. In a portable environment, the user wants to make the most of his time when watching TV; thus, investing too much time in searching for TV shows of interest is undesirable. Recommender systems are used to minimize such shortcomings, helping users in their search for contents of interest and reducing the time spent on searches. This study focused on the development of a hybrid recommender system called BIPODiTVR, which is able to make recommendations from the observation of users' behavior while watching television on the portable device. The system recommends TV shows through collaborative and content-based filtering. The proposed system was evaluated using acceptability metrics for recommender systems, applied to data provided by IBOPE from six households over a period of fifteen days. / A Televisão Digital Interativa permite que diversos serviços sejam oferecidos aos usuários, possibilitando entretenimento, educação à distância e novos mecanismos para a inclusão social. Emissoras e programas de TV podem ser criados, o que pode aumentar gradativamente a quantidade de informação disponível a ser visualizada nas telas. Como consequência, os usuários podem ter dificuldades em encontrar as informações que realmente interessam. No ambiente portátil o usuário deseja aproveitar ao máximo o seu tempo de visualização de TV, ou seja, investir tempo demasiado para procurar programas de TV do seu interesse é algo indesejado. Os sistemas de recomendação permitem minimizar tais dificuldades, auxiliando os usuários na sua busca por conteúdos que sejam do seu interesse e também reduzindo o tempo gasto durante a busca. Este trabalho tem como foco o desenvolvimento de um sistema de recomendação híbrido, denominado BIPODiTVR, que é capaz de recomendar conteúdo adequado a partir da observação do comportamento do usuário durante o uso da televisão no seu dispositivo portátil. O sistema recomenda programas de TV aos usuários utilizando as técnicas de Filtragem Colaborativa e Filtragem Baseada em Conteúdo e foi avaliado a partir de métricas de aceitabilidade de sistemas de recomendação baseado em dados fornecidos pelo IBOPE de seis domicílios em um período de quinze dias.
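As an illustration of the hybrid filtering mentioned above (a generic sketch with invented data structures, not BIPODiTVR's actual algorithm), a show's score can mix a collaborative component, from ratings of similar viewers, with a content-based component, from genre overlap with the viewer's history:

```python
def hybrid_score(show, viewer, similar_viewers_ratings, weight=0.5):
    """Weighted blend of collaborative and content-based evidence."""
    collaborative = (sum(similar_viewers_ratings) / len(similar_viewers_ratings)
                     if similar_viewers_ratings else 0.0)
    overlap = len(set(show["genres"]) & set(viewer["watched_genres"]))
    content_based = overlap / max(len(show["genres"]), 1)
    return weight * collaborative + (1 - weight) * content_based

show = {"title": "Evening News", "genres": ["news", "politics"]}
viewer = {"watched_genres": ["news", "sports"]}
print(hybrid_score(show, viewer, similar_viewers_ratings=[0.8, 0.6, 0.9]))  # ~0.63
```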
520

Comunicação direta entre dispositivos usando o modelo centrado em conteúdo / Direct communication between devices using the content-centric model

Floôr, Igor Maldonado 13 November 2015 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) / The popularization of mobile devices capable of communicating via wireless network technologies allows us to consider different scenarios in which these devices may autonomously interact with each other. The envisioned communications would occur in a P2P fashion, as each device could simultaneously provide and consume services. A mechanism for dynamically discovering nearby devices and the available services would be necessary. Although a few existing applications already provide direct interaction among devices, they are purpose-specific and rely on pre-configured information for identifying other devices. A service-oriented architecture (SOA), based on HTTP requests and the REST or SOAP protocols, is commonly used in this type of communication. However, automatically finding available services is still challenging. Service discovery is usually based exclusively on the service name, which is not very flexible. This work proposes a new model for direct interaction between computing devices. In an attempt to facilitate service discovery and selection, we propose a content-centric model in which interactions are defined according to an object's type and the action to be applied to it. The proposed approach can work atop existing discovery protocols, based on extensible metadata fields and on existing service data. Our proposal is evaluated according to i) the viability of direct communication between nearby devices, even when carried by users or associated with vehicles; ii) the proposed service discovery and matching using the content-centric approach; and iii) the effectiveness of a middleware to support the development of generic applications for direct device communication. Simulation results show our proposed model is viable. A preliminary implementation of the middleware was also evaluated and the results show that spontaneous, opportunistic, service-based interactions among devices can be achieved for different types of services. / A popularização de dispositivos móveis dotados de capacidade de comunicação sem fio possibilita a criação de ambientes onde estes dispositivos interagem diretamente entre si. Essas comunicações ocorrem no modelo P2P, de forma que cada dispositivo pode implementar simultaneamente papéis de cliente e de servidor. Contudo, para que ocorram interações diretas entre dispositivos através de aplicações, é preciso que estes dispositivos implementem algum mecanismo de descoberta. Atualmente, a maioria das aplicações que se comunicam diretamente utilizam informações pré-configuradas para identificação de dispositivos e serviços. Uma forma utilizada para interação entre dispositivos é através da oferta e consumo de serviços utilizando a arquitetura orientada a serviços (SOA), baseada em requisições HTTP utilizando os padrões REST ou SOAP. Um problema recorrente para consumidores de serviços é a identificação de serviços disponíveis. A identificação utilizada em protocolos de descoberta existentes baseia-se apenas no nome do serviço, salvo em comunicações pré-configuradas, o que não apresenta flexibilidade para descobrir novos serviços. De forma a facilitar a troca de informações entre dispositivos, este trabalho propõe um modelo em que interações diretas entre dispositivos sejam centradas no conteúdo envolvido na interação e nas ações que se deseja realizar sobre eles. Para tanto, uma identificação de serviço pode ser baseada em metadados que são adicionados às descrições de serviços existentes, ou em informações obtidas com protocolos de descoberta de serviço existentes. Para avaliar o modelo proposto, esse trabalho apresenta um estudo sobre i) a viabilidade de interações diretas entre dispositivos, considerando suas mobilidades; ii) o uso de um modelo de interação centrado em conteúdo e ação; iii) o desenvolvimento de um Middleware para simplificar o desenvolvimento de aplicações que usem o modelo de serviço proposto. Os resultados de simulação obtidos mostram que o modelo é viável. Além disso, uma versão preliminar do Middleware proposto foi avaliada e mostra que a interação direta entre dispositivos pode ocorrer de forma oportunística e espontânea.
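A small sketch of the content-centric matching idea described above (the descriptor format and service names are invented for illustration): a request names an object type and an action, and discovery returns every advertised service whose metadata covers both, regardless of the service's name.

```python
services = [
    {"name": "PhotoPrinter", "accepts": {"image/jpeg", "application/pdf"},
     "actions": {"print"}},
    {"name": "BigScreen", "accepts": {"image/jpeg", "video/mp4"},
     "actions": {"display"}},
]

def discover(content_type, action, advertised=services):
    """Match services by what they can do to the content, not by their name."""
    return [s for s in advertised
            if content_type in s["accepts"] and action in s["actions"]]

print([s["name"] for s in discover("image/jpeg", "display")])  # ['BigScreen']
```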
