151

Hybrid real-time operating system integrated with middleware for resource-constrained wireless sensor nodes / Système d'exploitation temps-réel hybride intégré avec un middleware pour les noeuds capteurs sans fil contraints en ressources

Liu, Xing 30 June 2014 (has links)
Avec les avancées récentes en microélectronique, en traitement numérique et en technologie de communication, les noeuds de réseau de capteurs sans fil (noeud RCSF) deviennent de moins en moins encombrants et coûteux. De ce fait la technologie de RCSF est utilisée dans de larges domaines d’application. Comme les noeuds RCSF sont limités en taille et en coût, ils sont en général équipés d’un petit microcontrôleur de faible puissance de calcul et de mémoire etc. De plus ils sont alimentés par une batterie donc son énergie disponible est limitée. A cause de ces contraintes, la plateforme logicielle d’un RCSF doit consommer peu de mémoire, d’énergie, et doit être efficace en calcul. Toutes ces contraintes rendent les développements de logiciels dédiés au RCSF très compliqués. Aujourd’hui le développement d’un système d’exploitation dédié à la technologie RCSF est un sujet important. En effet avec un système d’exploitation efficient, les ressources matérielles d’une plateforme RCSF peuvent être utilisées efficacement. De plus, un ensemble de services système disponibles permet de simplifier le développement d’une application. Actuellement beaucoup de travaux de recherche ont été menés pour développer des systèmes d’exploitation pour le RCSF tels que TinyOS, Contiki, SOS, openWSN, mantisOS et simpleRTJ. Cependant plusieurs défis restent à relever dans le domaine de système d’exploitation pour le RCSF. Le premier des défis est le développement d’un système d’exploitation temps réel à faible empreinte mémoire dédié au RCSF. Le second défi est de développer un mécanisme permettant d’utiliser efficacement la mémoire et l’énergie disponible d’un RCSF. De plus, comment fournir un développement d’application pour le RCSF reste une question ouverte. Dans cette thèse, un nouveau système d’exploitation hybride, temps réel à énergie efficiente et à faible empreinte mémoire nommé MIROS dédié au RCSF a été développé. Dans MIROS, un ordonnanceur hybride a été adopté ; les deux ordonnanceurs évènementiel et multithread ont été implémentés. Avec cet ordonnanceur hybride, le nombre de threads de MIROS peut être diminué d’une façon importante. En conséquence, les avantages d’un système d’exploitation évènementiel qui consomme peu de ressource mémoire et la performance temps réel d’un système d’exploitation multithread ont été obtenues. De plus, l’allocation dynamique de la mémoire a été aussi réalisée dans MIROS. La technique d’allocation mémoire de MIROS permet l’augmentation de la zone mémoire allouée et le réassemblage des fragments de mémoire. De ce fait, l’allocation de mémoire de MIROS devient plus flexible et la ressource mémoire d’un noeud RCSF peut être utilisée efficacement. Comme l’énergie d’un noeud RCSF est une ressource à forte contrainte, le mécanisme de conservation d’énergie a été implanté dans MIROS. Contrairement aux autres systèmes d’exploitation pour RCSF où la conservation d’énergie a été prise en compte seulement en logiciel, dans MIROS la conservation d’énergie a été prise en compte à la fois en logiciel et en matériel. Enfin, pour fournir un environnement de développement convivial aux utilisateurs, un nouveau intergiciel nommé EMIDE a été développé et intégré dans MIROS. EMIDE permet le découplage d’une application de système. Donc le programme d’application est plus simple et la reprogrammation à distance est plus performante, car seulement les codes de l’application seront reprogrammés. 
Les évaluations de performance de MIROS montrent que MIROS est un système temps réel à faible empreinte mémoire et efficace pour son exécution. De ce fait, MIROS peut être utilisé dans plusieurs plateformes telles que BTnode, IMote, SenseNode, TelosB et T-Mote Sky. Enfin, MIROS peut être utilisé pour les plateformes RCSF à fortes contraintes de ressources. / With the recent advances in microelectronics, computing and communication technologies, wireless sensor network (WSN) nodes have become physically smaller and more inexpensive. As a result, WSN technology has become increasingly popular in widespread application domains. Since WSN nodes are minimized in physical size and cost, they are severely constrained in platform resources such as processing capability, memory and energy supply. The constrained platform resources and diverse application requirements make software development on the WSN platform complicated. On the one hand, the software running on the WSN platform should be small in memory footprint, low in energy consumption and high in execution efficiency. On the other hand, the diverse application development requirements, such as real-time guarantees and high reprogramming performance, should be met by the WSN software. Operating system (OS) technology is significant for WSN proliferation. An outstanding WSN OS can not only utilize the constrained WSN platform resources efficiently, but also serve the WSN applications soundly. Currently, a set of WSN OSes have been developed, such as TinyOS, Contiki, SOS, openWSN and mantisOS. However, many OS development challenges still exist, such as developing a WSN OS that is high in real-time performance yet low in memory footprint, improving the utilization efficiency of the memory and energy resources on WSN platforms, and providing a user-friendly application development environment to WSN users. In this thesis, a new hybrid, real-time, energy-efficient, memory-efficient, fault-tolerant and user-friendly WSN OS named MIROS is developed. MIROS uses hybrid scheduling to combine the advantages of the event-driven system's low memory consumption and the multithreaded system's high real-time performance. By doing so, real-time scheduling can be achieved on severely resource-constrained WSN platforms. In addition to the hybrid scheduling, dynamic memory allocators are also realized in MIROS. Unlike other dynamic allocation approaches, the memory heap in MIROS can be extended and the memory fragments in MIROS can be defragmented. As a result, the MIROS allocators are flexible and the memory resources can be utilized more efficiently. Besides the above mechanisms, an energy conservation mechanism is implemented in MIROS. Unlike most other WSN OSes, in which energy is conserved only from the software side, energy conservation in MIROS is achieved from both the software side and the multi-core hardware side. With this conservation mechanism, the energy cost is reduced significantly and the lifetime of the WSN nodes is prolonged. Furthermore, MIROS implements the new middleware EMIDE in order to provide a user-friendly application development environment to WSN users. With EMIDE, the WSN application space can be decoupled from the low-level system space. Consequently, application programming is simplified, as users only need to focus on the application space. Moreover, application reprogramming performance is improved, as only the application image rather than the monolithic system image needs to be updated during the reprogramming process. The performance evaluations of MIROS show that it is a real-time OS with a small memory footprint, low energy cost and high execution efficiency. Thus, it is suitable for many WSN platforms, including BTnode, IMote, SenseNode, TelosB and T-Mote Sky. The performance evaluation of EMIDE shows that it has a low memory cost and low energy consumption, and that it supports small application code sizes. It can therefore be used on highly resource-constrained WSN platforms to provide a user-friendly development environment to WSN users.
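The hybrid scheduling idea in this abstract, short run-to-completion event handlers sharing one stack plus dedicated threads reserved only for deadline-sensitive work, can be sketched in a few lines. The Python below is an editorial illustration, not code from MIROS (which targets resource-constrained microcontrollers); the class and task names are invented.

import threading
import queue
import time

class HybridScheduler:
    """Toy hybrid scheduler: ordinary tasks run to completion inside one
    event loop (one shared stack), while tasks flagged as real-time each
    get their own worker thread (own stack), mirroring the event-driven /
    multithreaded split described in the abstract."""

    def __init__(self):
        self._events = queue.PriorityQueue()   # (priority, seq, callback)
        self._seq = 0

    def post_event(self, callback, priority=10):
        """Queue a short run-to-completion handler on the shared event loop."""
        self._events.put((priority, self._seq, callback))
        self._seq += 1

    def spawn_realtime(self, fn, *args):
        """Give a deadline-sensitive task its own thread (its own stack)."""
        t = threading.Thread(target=fn, args=args, daemon=True)
        t.start()
        return t

    def run(self, duration_s=1.0):
        """Drain the event queue for a while, highest priority first."""
        deadline = time.time() + duration_s
        while time.time() < deadline:
            try:
                _, _, callback = self._events.get(timeout=0.05)
            except queue.Empty:
                continue
            callback()          # handlers must stay short and non-blocking

# Example usage with invented task names.
sched = HybridScheduler()
sched.post_event(lambda: print("sample temperature sensor"), priority=5)
sched.post_event(lambda: print("log reading to flash"), priority=20)
sched.spawn_realtime(lambda: print("radio interrupt service"))
sched.run(duration_s=0.2)

The point of the split is that only the handful of real-time tasks pay the per-thread stack cost; everything else shares the event loop's single stack.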
152

Persistent memory and orthogonal persistence : a persistent heap design and its implementation for the Java virtual machine / Memória persistente e persistência ortogonal : um projeto heap persistente e sua implementação para a máquina virtual Java

Perez, Taciano Dreckmann 03 May 2017 (has links)
Sistemas computacionais da atualidade tradicionalmente separam memória e armazenamento. Linguagens de programação tipicamente refletem essa distinção usando diferentes representações para dados em memória (ex. estruturas de dados, objetos) e armazenamento (ex. arquivos, bancos de dados). A movimentação de dados entre esses dois níveis e representações, bidirecionalmente, compromete tanto a eficiência do programador quanto de execução dos programas. Tecnologias recentes de memória não-volátil, tais como memória de transição de fase, resistiva e magnetoresistiva, possibilitam combinar memória principal e armazenamento em uma única entidade de memória persistente, abrindo caminho para abstrações mais eficientes para lidar com persistência de dados. Essa tese de doutorado introduz uma abordagem de projeto para o ambiente de execução de linguagens com gerência automática de memória, baseado numa combinação original de persistência ortogonal, programação para memória persistente, persistência por alcance, e transações com atomicidade em caso de falha. Esta abordagem pode melhorar significativamente a produtividade do programador e a eficiência de execução dos programas, uma vez que estruturas de dados em memória passam a ser persistentes de forma transparente, sem a necessidade de programar explicitamente o armazenamento, e removendo a necessidade de cruzar fronteiras semânticas. De forma a validar e demonstrar a abordagem proposta, esse trabalho também apresenta JaphaVM, a primeira Máquina Virtual Java especificamente projetada para memória persistente. Resultados experimentais usando benchmarks e aplicações reais demonstram que a JaphaVM, na maioria dos casos, executa as mesmas operações cerca de uma a duas ordens de magnitude mais rapidamente do que implementações equivalentes usando bancos de dados ou arquivos, e, ao mesmo tempo, requer significativamente menos linhas de código. / Current computer systems separate main memory from storage. Programming languages typically reflect this distinction using different representations for data in memory (e.g. data structures, objects) and storage (e.g. files, databases). Moving data back and forth between these different layers and representations compromises both programming and execution efficiency. Recent nonvolatile memory technologies, such as Phase-Change Memory, Resistive RAM, and Magnetoresistive RAM, make it possible to collapse main memory and storage into a single layer of persistent memory, opening the way for simpler and more efficient programming abstractions for handling persistence. This Ph.D. thesis introduces a design for the runtime environment of languages with automatic memory management, based on an original combination of orthogonal persistence, persistent memory programming, persistence by reachability, and lock-based failure-atomic transactions. Such a design can significantly increase programming and execution efficiency, as in-memory data structures are transparently persistent, without the need for programmatic persistence handling, and it removes the need for crossing semantic boundaries. 
In order to validate and demonstrate the proposed concepts, this work also presents JaphaVM, the first Java Virtual Machine specifically designed for persistent memory. In experimental results using benchmarks and real-world applications, JaphaVM in most cases executed the same operations between one and two orders of magnitude faster than database- and file-based implementations, while requiring significantly fewer lines of code.
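As a rough illustration of persistence by reachability (not JaphaVM's actual mechanism), the sketch below walks every object reachable from a designated persistent root and captures that closure in one step. The Node class and file name are invented for the example.

import pickle

class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

def reachable(root):
    """Collect every object reachable from the persistent root."""
    seen, stack = [], [root]
    while stack:
        obj = stack.pop()
        if any(obj is s for s in seen):
            continue
        seen.append(obj)
        stack.extend(obj.children)
    return seen

def persist(root, path="heap_image.bin"):
    """Write the reachable closure in one step; a real persistent heap would
    instead make the in-memory structures durable in place, failure-atomically."""
    with open(path, "wb") as f:
        pickle.dump(root, f)   # pickling follows references, like reachability

root = Node("persistent-root", [Node("accounts"), Node("orders", [Node("order-42")])])
print(len(reachable(root)), "objects reachable from the root")
persist(root)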
153

Μελέτη και ανάπτυξη τεχνικών για την αποτελεσματική διαχείριση πόρων σε δίκτυα πλέγματος και υποδομές υπολογιστικών νεφών / Study and development of techniques for the efficient management of resources in grid networks and cloud computing infrastructures

Κρέτσης, Αριστοτέλης 25 February 2014 (has links)
Οι τεχνολογίες κατανεμημένου υπολογισμού, όπως τα δίκτυα πλέγματος και οι υποδομές Νέφους, έχουν διαμορφώσει πλέον ένα καινούργιο περιβάλλον σχετικά με τον τρόπο που εκτελούνται οι εργασίες των χρηστών, αποθηκεύονται τα δεδομένα και γενικότερα χρησιμοποιούνται οι εφαρμογές. Τα δίκτυα πλέγματος αποτέλεσαν το επίκεντρο της σχετικής ερευνητικής δραστηριότητας για μεγάλο χρονικό διάστημα, με βασικό στόχο τη δημιουργία υποδομών για την εκτέλεση ερευνητικών εφαρμογών με πολύ υψηλές υπολογιστικές και αποθηκευτικές απαιτήσεις. Ωστόσο είναι πλέον προφανές ότι υπάρχει μια στροφή προς τις υποδομές Νέφους που προσφέρουν υπηρεσίες κατανεμημένου υπολογισμού και αποθήκευσης μέσω πλήρως διαχειρίσιμων πόρων. Η συγκεκριμένη μετάβαση έχει ως αποτέλεσμα μια μετατόπιση από το μοντέλο των πολλών και ισχυρών πόρων που βρίσκονται κατανεμημένοι σε διάφορες περιοχές του κόσμου (όπως στα δίκτυα πλέγματος) προς σχετικά λιγότερα αλλά πολύ μεγαλύτερα ως προς το μέγεθος κέντρα δεδομένων τα οποία αποτελούνται από χιλιάδες υπολογιστικούς πόρους οι οποίοι φιλοξενούν ακόμη περισσότερες εικονικές μηχανές. Η έρευνα που διεξάγαμε ακολούθησε αυτή την αλλαγή, μελετώντας αλγοριθμικά θέματα για δίκτυα πλέγματος και υποδομές Νεφών και αναπτύσσοντας μια σειρά από εργαλεία και εφαρμογές που διαχειρίζονται, παρακολουθούν και αξιοποιούν τους πόρους που προσφέρουν οι συγκεκριμένες υποδομές. Αρχικά, μελετούμε τα ζητήματα που προκύπτουν κατά την υλοποίηση αλγορίθμων χρονοπρογραμματισμού, που είχαν προηγουμένως μελετηθεί σε περιβάλλοντα προσομοίωσης, σε ένα πραγματικό σύστημα ενδιάμεσου λογισμικού για δίκτυα πλέγματος, και συγκεκριμένα το gLite. Το πρώτο ζήτημα που αντιμετωπίσαμε είναι το γεγονός ότι οι πληροφορίες που παρέχει το ενδιάμεσο λογισμικό gLite στους αλγορίθμους χρονοπρογραμματισμού δεν είναι πάντα έγκυρες, γεγονός που επηρεάζει την αποδοσή τους. Για την αντιμετώπιση του προβλήματος αναπτύξαμε ένα εσωτερικό, στο χρονοπρογραμματιστή, μηχανισμό που καταγράφει τις αποφάσεις του σχετικά με ποιές εργασίες ανατέθηκαν σε ποιούς υπολογιστικούς πόρους και λειτουργεί συµπληρωµατικά µε την υπηρεσία πληροφοριών του gLite. Επιπλέον, εξετάζουμε το ζήτημα του δίκαιου διαμοιρασμού της υπολογιστικής χωρητικότητας ενός πόρου στις εργασίες που έχουν ανατεθεί σε αυτόν. Για το σκοπό αυτό, επεκτείνουμε το ενδιάμεσο λογισμικό gLite ώστε να περιλαμβάνει ένα νέο μηχανισμό που μέσω της αξιοποίησης της τεχνολογίας εικονικοποίησης επιτρέπει τον ταυτόχρονο διαμοιρασμό της υπολογιστικής χωρητικότητας ενός κόμβου σε πολλές εργασίες. Στην συνέχεια εξατάζουμε το πρόβλημα της συνδυασμένης μεταφοράς πολλαπλών εικονικών μηχανών σε σύγχρονες υπολογιστικές υποδομές. Πιο συγκεκριμένα, προτείνουμε μια μεθοδολογία που στοχεύει στην καλύτερη χρησιμοποίηση των διαθέσιμων υπολογιστικών και δικτυακών πόρων, λαμβάνοντας υπόψη στις αποφάσεις σχετικά με τη συνδυασμένη μεταφορά εικονικών μηχανών τις αλληλεξαρτήσεις που δημιουργούνται από την επικοινωνία τους. Η προτεινόμενη μεθοδολογία χρησιμοποιεί την προσέγγιση πολλαπλών κριτηρίων για την επιλογή των εικονικών μηχανών που θα μετακινηθούν, αναθέτοντας διαφορετικά βάρη στα διάφορα κριτήρια ενδιαφέροντος. Επιπλέον, επιλέγει τους υπολογιστικούς κόμβους όπου οι μετακινούμενες εικονικές μηχανές θα φιλοξενηθούν, λαμβάνοντας υπόψη τον τρόπο με τον οποίο οι μετακινήσεις επηρεάζουν τις λογικές (ή εικονικές) τοπολογίες που σχηματίζονται από την επικοινωνία τους και αντιμετωπίζοντας τη συγκεκριμένη επιλογή ως ένα πρόβλημα αναδιάρθρωσης λογικών τοπολογιών. 
Η αξιολόγηση επιβεβαίωσε τη δυνατότητα της μεθοδολογίας να επιλύει, μέσω των κατάλληλων μετακινήσεων, ένα σημαντικό αριθμό προβλημάτων που οφείλονται σε ελλείψεις υπολογιστικών ή επικοινωνιακών πόρων, ελαχιστοποιώντας παράλληλα τον αριθμό των μετακινήσεων και την προκαλούμενη επιβάρυνση του δικτύου. Το επόμενο θέμα που εξετάζουμε αφορά το πρόβλημα της ανάλυσης δεδομένων επικοινωνίας μεταξύ εικονικών μηχανών οι οποίες φιλοξενούνται σε ένα κέντρο δεδομένων. Προτείνουμε και αξιολογούμε, μέσω της ανάλυσης δεδομένων από ένα πραγματικό κέντρο δεδομένων, την εφαρμογή μετρικών και τεχνικών από τη θεωρία ανάλυσης κοινωνικών δικτύων για τον προσδιορισμό σημαντικών εικονικών μηχανών, για παράδειγμα εικονικές μηχανές οι οποίες απαιτούν περισσότερο εύρος ζώνης σε σχέση με άλλες, και ομάδων εικονικών μηχανών που συσχετίζονται με κάποιο τρόπο μεταξύ τους. Μέσω της συγκεκριμένης προσέγγισης έχουμε τη δυνατότητα να εξάγουμε σημαντικές πληροφορίες οι οποίες μπορούν να αξιοποιηθούν για τη λήψη καλύτερων αποφάσεων σχετικά με τη διαχείριση του πολύ μεγάλου πλήθους των εικονικών μηχανών που φιλοξενούνται στα σύγχρονα κέντρα δεδομένων. Στη συνέχεια προσδιορίζουμε τρόπους με τους οποίους οι πληροφορίες παρακολούθησης που συλλέγονται από τη λειτουργία μιας δημόσιας υποδομής Υπολογιστικού Νέφους, και ιδίως από την υπηρεσία Amazon Web Services (AWS), μπορούν να χρησιμοποιηθούν με ένα αποδοτικό τρόπο προκειμένου να εξάγουμε πολύτιμες πληροφορίες, που μπορούν να αξιοποιηθούν από τους τελικούς χρήστες για την αποτελεσματικότερη διαχείριση των εικονικών πόρων τους. Πιο συγκεκριμένα, παρουσιάζουμε το σχεδιασμό και την υλοποίηση ενός εργαλείου ανοιχτού κώδικα, του SuMo, στο όποιο έχουμε υλοποίησει όλη την απαραίτητη λειτουργικότητα για τη συλλογή και ανάλυση δεδομένων παρακολούθησης από την υπηρεσία AWS. Επιπλέον, προτείνουμε ένα μηχανισμό για τη βελτιστοποίηση του κόστους και της αξιοποίησης (Cost and Utilization Optimization - CUO) των εικονικών υπολογιστικών πόρων της υπηρεσίας AWS. Ο μηχανισμός CUO χρησιμοποιεί πληροφορίες (πλήθος, ακριβή χαρακτηριστικά, ποσοστό αξιοποίησης) για τους διαθέσιμους εικονικούς πόρους ενός χρήστη και προτείνει ένα νέο (βέλτιστο) σύνολο πόρων που θα μπορούσαν να χρησιμοποιηθούν για την αποδοτικότερη εξυπηρέτηση του ίδιου φορτίου εργασίας με μειωμένο κόστος. Τέλος, παρουσιάζουμε την υλοποίηση ενός ολοκληρωμένου εργαλείου, που ονομάζουμε Mantis, για το σχεδιασμό και τη λειτουργία των μελλοντικών ευέλικτων (flex-grid) οπτικών δικτύων που υποστηρίζει επιπλέον οπτικά δίκτυα σταθερού πλέγματος τόσο μοναδικού ρυθμού μετάδοσης όσο και πολλαπλών ρυθμών μετάδοσης. Οι χρήστες έχουν τη δυνατότητα να καθορίζουν δικτυακές τοπολογίες, απαιτήσεις κίνησης, παραμέτρους για το κόστος απόκτησης και λειτουργίας των δικτυακών συσκευών, ενώ επιπλέον έχουν πρόσβαση σε αρκετούς αλγορίθμους για το σχεδιασμό, λειτουργία και αξιολόγηση διαφόρων οπτικών δικτύων. Το εργαλείο έχει σχεδιαστεί ώστε να μπορεί να λειτουργεί είτε ως υπηρεσία (Software as a Service) είτε ως κλασσική εφαρμογή (Desktop Application). Λειτουργώντας ως υπηρεσία παρέχει κλιμάκωση με βάση τις απαιτήσεις των χρηστών, αξιοποιώντας τα πλεονεκτήματα των υποδομών Υπολογιστικού Νέφους, εκτελώντας γρήγορα και αποτελεσματικά τις εργασίες των χρηστών. Για τη λειτουργία αυτή, μπορεί να χρησιμοποιεί τόσο δημόσιες υποδομές Υπολογιστικού Νέφους όπως η υπηρεσία Amazon Web Services (AWS) και η υπηρεσία της ΕΔΕΤ (~okeanos), όσο και ιδιωτικές που βασίζονται στο OpenStack. 
Επιπλέον, η αρθρωτή αρχιτεκτονική και η υλοποίηση των διαφόρων λειτουργικών τμημάτων επιτρέπουν την εύκολη επέκταση του εργαλείου ώστε να υποστηρίζει μελλοντικά περισσότερες υποδομές Υπολογιστικού Νέφους. / Distributed computing technologies, like grids and clouds, shape today a new environment, regarding the way tasks are executed, data are stored and retrieved, and applications are used. Though grids and desktop grids have been the focus of the research community for a long time, a shift has become evident today towards cloud and virtualization related technologies in general, which are supported by large computing factories, namely the data centers. As a result there is also a shift from the model of several powerful resources distributed at various locations in the world (as in grids) towards fewer huge data centers consisting of thousands of “simple” computers that host Virtual Machines. The research performed over the course of my PhD followed this shift, investigating algorithmic issues in the context of grids and then of clouds and developing a number of tools and applications that manage, monitor and utilize these kinds of resources. Initially, we describe the steps followed, the difficulties encountered, and the solutions provided in developing and evaluating a scheduling policy, initially implemented in a simulation environment, in the gLite grid middleware. Our focus is on a scheduling algorithm that allocates in a fair way the available resources among the requested users or jobs. During the actual implementation of this algorithm in gLite, we observed that the validity of the information used by the scheduler for its decisions affects greatly its performance. To improve the accuracy of this information, we developed an internal feedback mechanism that operates along with the scheduling algorithm. Also, a Grid computation resource cannot be shared concurrently between different users or jobs, making it difficult to provide actual fairness. For this reason we investigated the use of virtualization technology in the gLite middleware. We implement and evaluate our scheduling algorithm and the proposed mechanisms in a small gLite testbed. Next, we present a methodology, called communication-aware virtual infrastructures (COMAVI), for the concurrent migration of multiple Virtual Machines (VMs) in computing infrastructures, which aims at the optimum use of the available computational and network resources, by capturing the interdependencies between the communicating VMs. This methodology uses multiple criteria for selecting the VMs that will migrate, with different weights assigned to each of them. COMAVI also selects the computing sites where the migrating VMs will be hosted, by accounting for the way migration affects the logical (or virtual) topologies formed by the communicating VMs and viewing this selection as a logical topology reconfiguration problem. We apply COMAVI to two basic computing infrastructures that exhibit different constraints/criteria and characteristics: a grid infrastructure operating over a wide area network (WAN) and a data center infrastructure operating over a local area network (LAN). Through the presented methodology different communication-aware VM migration algorithms can be tailored to the needs of the resource provider. 
The algorithms presented resolve the maximum possible number of VM violations (due to computing or communication resource shortages), while tending to minimize the number of migrations performed, the induced network overhead, the logical topology reconfigurations required, and the corresponding service interruptions. We evaluate the proposed methods through simulations in realistic computing environments, and we exhibit their performance benefits. We also consider the use of social network analysis methods on communication traces, collected from Virtual Machines (VMs) located in computing infrastructures, like a data center. Our aim is to identify important VMs, for example VMs that require more bandwidth than other VMs or VMs that communicate often with other VMs. We believe that this approach can handle the large number of VMs present in computing infrastructures and their interactions in the same way social interactions of millions of people are analyzed in today’s social networks. We are interested in identifying measures that can locate these important VMs or groups of interacting VMs, missed through other usual metrics and also capture the time-dynamicity of their interactions. In our work we use real traces and evaluate the applicability of the considered methods and measures. In addition, we consider the analysis and optimization of public clouds. For this reason, we identify important algorithmic operations that should be part of a cloud analysis and optimization tool, including resource profiling, performance spike detection and prediction, resource resizing, and others, and we investigate ways in which the collected monitoring information can be processed towards these purposes. The analyzed information is valuable since it can drive important virtual resource management decisions. We also present an open-source tool we developed, called SuMo, which contains the necessary functionalities for collecting monitoring data from Amazon Web Services (AWS), analyzing them and providing resource optimization suggestions. We also present a Cost and Utilization Optimization (CUO) mechanism for optimizing the cost and the utilization of a set of running Amazon EC2 instances, which is formulated as an Integer Linear Programming (ILP) problem. This CUO mechanism receives information regarding the current set of instances used (their number, type, utilization) and proposes a new set of instances for serving the same load, so as to minimize cost and maximize utilization and performance efficiency. Finally, we present a network planning and operation tool, called Mantis, for designing the next generation optical networks, supporting both flexible and mixed line rate WDM networks. Through Mantis, the user is able to define the network topology, current and forecasted traffic matrices, CAPEX/OPEX parameters, set up basic configuration parameters, and use a library of algorithms to plan, operate, or run what-if scenarios for an optical network of interest. Mantis is designed to be deployed either as a cloud service or as a desktop application. Using the cloud infrastructures features Mantis can scale according to the user demands, executing fast and efficiently the scenarios requested. Mantis supports different cloud platforms either public such as Amazon Elastic Compute Cloud (Amazon EC2) and ~okeanos the GRNET’s cloud service or private based on OpenStack, while its modular architecture allows other cloud infrastructures to be adopted in the future with minimum effort.
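The multi-criteria selection step described above can be pictured with a small weighted-scoring sketch. The criteria names, weights and sample values below are invented for illustration; the thesis's COMAVI methodology defines its own criteria and treats host selection as a logical-topology reconfiguration problem, which this snippet does not attempt.

# Each candidate VM gets a weighted score over illustrative criteria: CPU
# pressure on its current host (relief gained by moving it), traffic to
# co-located VMs (moving it pushes that traffic onto the network), and its
# memory size (migration cost).
CRITERIA_WEIGHTS = {
    "host_cpu_pressure": 0.5,
    "local_traffic": -0.3,
    "memory_gb": -0.2,
}

def migration_score(vm):
    return sum(w * vm[c] for c, w in CRITERIA_WEIGHTS.items())

def select_vms_to_migrate(vms, k=2):
    """Rank candidates by score and pick the k most attractive to move."""
    return sorted(vms, key=migration_score, reverse=True)[:k]

vms = [
    {"name": "vm-a", "host_cpu_pressure": 0.9, "local_traffic": 0.1, "memory_gb": 2},
    {"name": "vm-b", "host_cpu_pressure": 0.8, "local_traffic": 0.7, "memory_gb": 8},
    {"name": "vm-c", "host_cpu_pressure": 0.3, "local_traffic": 0.2, "memory_gb": 1},
]
for vm in select_vms_to_migrate(vms):
    print("migrate", vm["name"], round(migration_score(vm), 2))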
154

A layered JavaScript virtual machine supporting dynamic instrumentation

Lavoie, Erick 04 1900 (has links)
L’observation de l’exécution d’applications JavaScript est habituellement réalisée en instrumentant une machine virtuelle (MV) industrielle ou en effectuant une traduction source-à-source ad hoc et complexe. Ce mémoire présente une alternative basée sur la superposition de machines virtuelles. Notre approche consiste à faire une traduction source-à-source d’un programme pendant son exécution pour exposer ses opérations de bas niveau au travers d’un modèle objet flexible. Ces opérations de bas niveau peuvent ensuite être redéfinies pendant l’exécution pour pouvoir en faire l’observation. Pour limiter la pénalité en performance introduite, notre approche exploite les opérations rapides originales de la MV sous-jacente, lorsque cela est possible, et applique les techniques de compilation à-la-volée dans la MV superposée. Notre implémentation, Photon, est en moyenne 19% plus rapide qu’un interprète moderne, et entre 19× et 56× plus lente en moyenne que les compilateurs à-la-volée utilisés dans les navigateurs web populaires. Ce mémoire montre donc que la superposition de machines virtuelles est une technique alternative compétitive à la modification d’un interprète moderne pour JavaScript lorsqu’appliqué à l’observation à l’exécution des opérations sur les objets et des appels de fonction. / Run-time monitoring of JavaScript applications is typically achieved by instrumenting a production virtual machine or through ad-hoc, complex source-to-source transformations. This dissertation presents an alternative based on virtual machine layering. Our approach performs a dynamic translation of the client program to expose low-level operations through a flexible object model. These low-level operations can then be redefined at run time to monitor the execution. In order to limit the incurred performance overhead, our approach leverages fast operations from the underlying host VM implementation whenever possible, and applies Just-In-Time compilation (JIT) techniques within the added virtual machine layer. Our implementation, Photon, is on average 19% faster than a state-of-the-art interpreter, and between 19× and 56× slower on average than the commercial JIT compilers found in popular web browsers. This dissertation therefore shows that virtual machine layering is a competitive alternative approach to the modification of a production JavaScript interpreter when applied to run-time monitoring of object operations and function calls.
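Photon operates on JavaScript, but the underlying idea, routing every low-level object operation through a hook that can be redefined at run time while delegating to the host VM's fast native behaviour, can be illustrated with Python's attribute protocol. The sketch below is a conceptual analogue only; all names are invented.

class Instrumented:
    # Hooks are plain class attributes, so tools can swap them at run time.
    on_get = staticmethod(lambda obj, name: None)
    on_set = staticmethod(lambda obj, name, value: None)

    def __getattribute__(self, name):
        if not name.startswith("_") and name not in ("on_get", "on_set"):
            Instrumented.on_get(self, name)
        return object.__getattribute__(self, name)   # delegate to the fast path

    def __setattr__(self, name, value):
        Instrumented.on_set(self, name, value)
        object.__setattr__(self, name, value)

class Point(Instrumented):
    def __init__(self, x, y):
        self.x, self.y = x, y

# Redefine the hook during execution to start monitoring property reads.
reads = []
Instrumented.on_get = staticmethod(lambda obj, name: reads.append(name))

p = Point(1, 2)
_ = p.x + p.y
print("observed reads:", reads)   # e.g. ['x', 'y']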
155

Análise de desempenho de sistemas distribuídos de grande porte na plataforma Java

Lima, Gleydson de Azevedo Ferreira 02 February 2007 (has links)
The Java Platform is increasingly being adopted in the development of distributed systems with high user demand. This kind of application is more complex because, beyond meeting the functional requirements, it needs to fulfill pre-established performance parameters. This work studies the Java Virtual Machine (JVM), covering its internal aspects and exploring the garbage collection strategies found in the literature and used by the JVM. It also presents a set of tools that help in optimizing applications and others that help in monitoring applications in the production environment. Due to the great number of technologies that aim to solve problems common to the application layer, it becomes difficult to choose the one with the best response time and the lowest memory usage. This work presents a brief introduction to each of the candidate technologies and carries out comparative tests through a statistical analysis of the response-time and garbage-collection-activity random variables. The results obtained give engineers and managers a basis for deciding which technologies to use in large applications, based on how they behave in these environments and the amount of resources they consume. The relation between the productivity of a technology and its performance is also considered an important factor in this choice. / A plataforma Java vem sendo crescentemente adotada no desenvolvimento de sistemas distribuídos de alta demanda de usuários. Este tipo de aplicação é mais complexa pois necessita, além de atender os requisitos funcionais, cumprir os parâmetros de desempenho pré-estabelecidos. Este trabalho realiza um estudo da máquina virtual Java (JVM), abordando seus aspectos internos e explorando as políticas de coleta de lixo existentes na literatura e as usadas pela JVM. Apresenta também um conjunto de ferramentas que auxiliam na tarefa de otimizar aplicações e outras que auxiliam no monitoramento das aplicações em produção. Diante da grande quantidade de tecnologias que se apresentam para solucionar problemas inerentes às camadas das aplicações, torna-se difícil realizar a escolha daquela que possui o melhor tempo de resposta e o menor uso de memória. O trabalho apresenta um breve referencial teórico de cada uma das possíveis tecnologias e realiza testes comparativos através de uma análise estatística da variável aleatória do tempo de resposta e das atividades de coleta de lixo. Os resultados obtidos fornecem um subsídio para engenheiros e gerentes decidirem quais tecnologias utilizarem em aplicações de grande porte através do conhecimento de como elas se comportam nestes ambientes e a quantidade de recursos que consomem. A relação entre produtividade da tecnologia e seu desempenho também é considerada como um fator importante nesta escolha.
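The statistical treatment described here, modelling response time as a random variable and comparing technologies on summary statistics, can be sketched as follows. The sample values are invented placeholders, not measurements from the thesis.

import statistics
import math

# Summarise response-time samples per technology and compare the means with a
# rough 95% confidence interval (normal approximation).
samples = {
    "framework_a": [112, 120, 98, 131, 125, 118, 109, 140, 122, 115],   # ms
    "framework_b": [150, 162, 148, 171, 159, 155, 166, 149, 158, 161],
}

def summarise(name, xs):
    mean = statistics.mean(xs)
    sd = statistics.stdev(xs)
    half_width = 1.96 * sd / math.sqrt(len(xs))
    print(f"{name}: mean={mean:.1f} ms  95% CI=({mean - half_width:.1f}, {mean + half_width:.1f})")

for name, xs in samples.items():
    summarise(name, xs)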
156

Container Hosts as Virtual Machines : A performance study

Aspernäs, Andreas, Nensén, Mattias January 2016 (has links)
Virtualization is a technique used to abstract the operating system from the hardware. The primary gain of virtualization is increased server consolidation, leading to greater hardware utilization and infrastructure manageability. Another technology that can be used to achieve similar goals is containerization. Containerization is an operating-system-level virtualization technique which allows applications to run in partial isolation on the same hardware. Containerized applications share the same Linux kernel but run in packaged containers which include just enough binaries and libraries for the application to function. In recent years it has become more common to see hardware virtualization beneath the container host operating systems. An upcoming technology to further this development is VMware’s vSphere Integrated Containers, which aims to integrate management of Linux Containers with the vSphere (a hardware virtualization platform by VMware) management interface. With these technologies as background, we set out to measure the impact of hardware virtualization on Linux Container performance by running a suite of macro-benchmarks on a LAMP application stack. We performed the macro-benchmarks on three different operating systems (CentOS, CoreOS and Photon OS) in order to see if the choice of container host affects the performance. Our results show a decrease in performance when comparing a hardware-virtualized container host to a container host running directly on the hardware. However, the impact on containerized application performance can vary depending on the actual application, the choice of operating system and even the type of operation performed. It is therefore important to consider these three items before implementing container hosts as virtual machines.
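A minimal sketch of the macro-benchmark loop implied above: issue identical HTTP requests against the same containerised LAMP application hosted on different container hosts and compare latencies. The host addresses are placeholders, and the study itself used full benchmark suites rather than this simple loop.

import time
import urllib.request
import statistics

# Placeholder endpoints for the same containerised application on two hosts.
HOSTS = {
    "centos-on-esxi": "http://192.0.2.10/index.php",
    "coreos-bare-metal": "http://192.0.2.11/index.php",
}

def bench(url, requests=50):
    """Time repeated GET requests and report the median latency in ms."""
    latencies = []
    for _ in range(requests):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=5) as resp:
            resp.read()
        latencies.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(latencies)

if __name__ == "__main__":
    for name, url in HOSTS.items():
        try:
            print(f"{name}: median latency {bench(url):.1f} ms")
        except OSError as err:
            print(f"{name}: unreachable ({err})")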
157

Semantic Analysis of Web Pages for Task-based Personal Web Interactions

Manjunath, Geetha January 2013 (has links) (PDF)
Mobile widgets now form a new paradigm of the simplified web. Probably, the best experience of the Web is when a user has a widget for every frequently executed task, and can execute it anytime, anywhere on any device. However, the current method of programmatically creating personally relevant mobile widgets for every user does not scale. Creation of these mobile web widgets requires application programming as well as knowledge of web-related protocols. Furthermore, these mobile widgets are also limited to smart phones with data connectivity, and such smart phones form just about 15% of the mobile phones in India. How do we make the web accessible on devices that most people can afford? How does one create simple relevant tasks for the numerous diverse needs of every person? In this thesis, we attempt to address these issues and propose a new method of web simplification that enables an end-user to create simple single-click widgets for a complex personal task - without any programming. The proposed solution enables even low-end phones to access personal web tasks over SMS and voice. We propose a system that enables end users to create personal widgets via programming-by-browsing. A new concept called Tasklets, representing a user’s personal interaction, and a notion of programming over websites using a Web Virtual Machine are presented. Ensuring correct execution of these end-user widgets posed interesting problems in web data mining and required us to investigate new methods to characterize and semantically model browser-based interactions. In particular, an instruction set for programming over web sites, new domain-specific similarity measures using ontologies, algorithms for frequent-pattern mining of web interactions and change detection with a proof of its NP-completeness are presented. A quantitative metric to measure the interaction complexity of web browsing and a method of classifying relational data using semantics hidden in the schema are introduced as well. This new web architecture to enable multi-device access to a user's personal tasks over low-end phones was piloted with real users, as a solution named SiteOnMobile, and has received a very positive response.
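One of the building blocks mentioned, frequent-pattern mining over recorded browser interactions, can be pictured with a toy example that counts recurring action sequences across sessions of the same task. This is not the thesis's algorithm; the session data and event names are invented.

from collections import Counter

sessions = [
    ["open_page", "fill_field:city", "click:search", "read:results"],
    ["open_page", "fill_field:city", "click:search", "read:results", "click:next"],
    ["open_page", "fill_field:city", "click:search", "read:results"],
]

def frequent_ngrams(sessions, n=3, min_support=2):
    """Count length-n action sequences and keep those seen often enough."""
    counts = Counter()
    for s in sessions:
        for i in range(len(s) - n + 1):
            counts[tuple(s[i:i + n])] += 1
    return {g: c for g, c in counts.items() if c >= min_support}

for pattern, support in frequent_ngrams(sessions).items():
    print(support, "x", " -> ".join(pattern))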
158

Studies In Automatic Management Of Storage Systems

Pipada, Pankaj 06 1900 (has links) (PDF)
Autonomic management is important in storage systems, and the space of autonomics in storage systems is vast. Such autonomic management systems can employ a variety of techniques depending upon the specific problem. In this thesis, we first take an algorithmic approach towards reliability enhancement and then we use learning along with a reactive framework to facilitate storage optimization for applications. We study how the reliability of non-repairable systems can be improved through automatic reconfiguration of their XOR-coded structure. In this regard, we propose to increase the fault tolerance of non-repairable systems by reorganizing the system, after a failure is detected, to a new XOR code with better fault tolerance. As errors can manifest during reorganization due to whole reads of multiple submodules, our framework takes them into account and models such errors as being based on access intensity (i.e. BER, the bit error rate). We present and evaluate the reliability of an example storage system with and without reorganization. Motivated by the critical need for automating various aspects of data management in virtualized data centers, we study the specific problem of automatically implementing Virtual Machine (VM) migration in a dynamic environment according to some pre-set policies. This is a problem that requires automated identification of various workloads and their execution environments running inside virtual machines in a non-intrusive manner. To this end we propose AuM (for Autonomous Manager), which has the capability to learn workloads by aggregating a variety of information obtained from network traces of storage protocols. We use state-of-the-art machine learning tools, namely Multiple Kernel Learning, to aggregate information and show that AuM is indeed very accurate in identifying workloads and their execution environments, and is also successful in following user-set policies very closely for the VM migration tasks. Storage infrastructure in large-scale cloud data center environments must support applications with diverse, time-varying data access patterns while observing the quality of service. To meet service level requirements in such heterogeneous application phases, storage management needs to be phase-aware and adaptive, i.e., identify specific storage access patterns of applications as they occur and customize their handling accordingly. We build LoadIQ, an online application phase detector for networked (file and block) storage systems. In a live deployment, LoadIQ analyzes traces and emits phase labels learnt online. Such labels could be used to generate alerts or to trigger phase-specific system tuning.
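The reliability work above reorganises XOR-coded structures after failures. The snippet below only demonstrates the underlying primitive that such codes rely on: with a parity block defined as the XOR of the data blocks, any single lost block can be rebuilt from the survivors. It does not model the reorganization framework or its error analysis.

def xor_blocks(*blocks):
    """XOR equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks(d1, d2, d3)

# Pretend d2 was lost: rebuild it from the remaining blocks and the parity.
recovered = xor_blocks(d1, d3, parity)
assert recovered == d2
print("recovered block:", recovered)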
159

Balancing Money and Time for OLAP Queries on Cloud Databases

Sabih, Rafia January 2016 (has links) (PDF)
Enterprise Database Management Systems (DBMSs) have to contend with resource-intensive and time-varying workloads, making them well-suited candidates for migration to cloud platforms: specifically, they can dynamically leverage the resource elasticity while retaining affordability through the pay-as-you-go rental interface. The current design of database engine components lays emphasis on maximizing computing efficiency, but to fully capitalize on the cloud's benefits, the outlays of these computations also need to be factored into the planning exercise. In this thesis, we investigate this contemporary problem in the context of industrial-strength deployments of relational database systems on real-world cloud platforms. Specifically, we consider how the traditional metric used to compare query execution plans, namely response time, can be augmented to incorporate monetary costs in the decision process. The challenge here is that execution time and monetary cost are adversarial metrics, with a decrease in one entailing a rise in the other. For instance, a Virtual Machine (VM) with rich physical resources (RAM, cores, etc.) decreases the query response time, but is expensive with regard to rental rates. In a nutshell, there is a tradeoff between money and time, and our goal therefore is to identify the VM that offers the best tradeoff between these two competing considerations. In our study, we profile the behavior of money versus time for a given query, and define the best tradeoff as the "knee", that is, the location on the profile with the minimum Euclidean distance from the origin. To study the performance of industrial-strength database engines on real-world cloud infrastructure, we have deployed a commercial DBMS on Google cloud services. On this platform, we have carried out extensive experimentation with the TPC-DS decision-support benchmark, an industry-wide standard for evaluating database system performance. Our experiments demonstrate that the choice of VM for hosting the database server is a crucial decision, because: (i) variation in time and money across VMs is significant for a given query, and (ii) no one VM offers the best money-time tradeoff across all queries. To efficiently identify the VM with the best tradeoff from a large suite of available configurations, we propose a technique to characterize the money-time profile for a given query. The core of this technique is a VM pruning mechanism that exploits the partial-order (poset) structure of the VMs with respect to their resources. It processes the minimal and maximal VMs of this poset for estimated query response time. If the response times on these extreme VMs are similar, then all the VMs sandwiched between them are pruned from further consideration. Otherwise, the already processed VMs are set aside, and the minimal and maximal VMs of the remaining unprocessed VMs are evaluated for their response times. Finally, the knee VM is identified from the processed VMs as the one with the minimum Euclidean distance from the origin in the money-time space. We theoretically prove that this technique always identifies the knee VM; further, if it is acceptable to find a "near-optimal" knee by allowing a relaxation factor on the response-time distance from the optimal knee, then it is also capable of finding a satisfactory knee more efficiently under these relaxed conditions. 
We propose two flavors of this approach: the first one prunes the VMs using complete plan information received from the database engine API, and is named Plan-based Identification of Knee (PIK). To further increase the efficiency of identifying the knee VM, we also propose a sub-plan-based pruning algorithm called Sub-Plan-based Identification of Knee (SPIK), which requires modifications in the query optimizer. We have evaluated PIK on a commercial system and found that it often requires processing only 20% of the total VMs. The efficiency of the algorithm is increased further, and significantly, by using a 10-20% relaxation in response time. For evaluating SPIK, we prototyped it on an open-source engine, PostgreSQL 9.3, and also implemented it as a Java wrapper program with the commercial engine. Experimentally, the processing done by SPIK is found to be only 40% of that of the PIK approach. Therefore, from an overall perspective, this thesis facilitates the desired migration of enterprise databases to cloud platforms, by identifying the VM(s) that offer competitive tradeoffs between money and time for the given query.
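A toy version of the knee selection can be written directly from the definition above: estimate (time, money) for the query on each candidate VM, normalise both axes, and pick the VM whose point is closest to the origin. The figures are invented, and the sketch omits the poset-based pruning that makes PIK and SPIK efficient.

import math

candidates = {
    # vm_type: (estimated response time in s, estimated monetary cost for the query)
    "small":  (420.0, 0.04),
    "medium": (260.0, 0.07),
    "large":  (150.0, 0.15),
    "xlarge": (140.0, 0.30),
}

def knee(candidates):
    """Return the VM whose normalised (time, money) point is nearest the origin."""
    max_t = max(t for t, _ in candidates.values())
    max_c = max(c for _, c in candidates.values())
    def dist(tc):
        t, c = tc
        return math.hypot(t / max_t, c / max_c)
    return min(candidates, key=lambda vm: dist(candidates[vm]))

print("knee VM:", knee(candidates))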
