1

Novel Application Models and Efficient Algorithms for Offloading to Clouds

González Barrameda, José Andrés, January 2017
The application offloading problem in Mobile Cloud Computing aims at improving the mobile user experience by leveraging the resources of the cloud: the execution of the mobile application is offloaded to the cloud, saving energy on the mobile device or speeding up the execution of the application. We improve the accuracy and performance of application offloading solutions in three main directions.

First, we propose a novel fine-grained application model that supports complex module dependencies such as sequential, conditional and parallel module executions. The model also allows for multiple offloading decisions tailored to the current application, network, or user context. As a result, the model captures the structure of the application more precisely and supports more complex offloading solutions.

Second, we propose three cost models defined on the proposed application model: average-based, statistics-based and interval-based. The average-based approach models each module's cost by its expected value, and the expected cost of the entire application is estimated by taking each of the three module dependencies into account. The novel statistics-based cost model employs Cumulative Distribution Functions (CDFs) to represent the costs of the modules and of the mobile application, the latter being estimated from the costs and dependencies of the modules. This cost model opens the door to new statistics-based optimization functions and constraints, whereas the state of the art only supports optimizations based on the average running cost of the application. Furthermore, it can be used to perform statistical analysis of the application's performance in different scenarios, such as varying network data rates. The last cost model, the interval-based model, represents module costs as intervals in order to address cost uncertainty while having lower data requirements and computational complexity than the statistics-based model; the cost of the application is estimated as an expected maximum cost via a linear optimization function.

Finally, we present offloading decision algorithms for each cost model. For the average-based model, we present a fast optimal dynamic programming algorithm. For the statistics-based model, we present another fast optimal dynamic programming algorithm for the scenario where the optimization function meets specific properties. For the interval-based cost model, we present a robust formulation that solves a linear number of linear optimization problems. Our evaluations verify the accuracy of the models and show higher cost savings for our solutions compared to the state of the art.
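To make the proposed application and cost models concrete, the sketch below evaluates an average-based expected cost over a toy module tree with sequential, conditional and parallel composition, and picks an offloading decision by brute force. All names, cost figures and the exhaustive search are illustrative assumptions, not the thesis's actual formulation, which uses dynamic programming for the decision step.

```python
# Illustrative sketch: an application is a tree of modules composed sequentially,
# conditionally, or in parallel; the expected application cost follows from the
# per-module costs and a per-module offloading decision (local vs. cloud).
from dataclasses import dataclass
from itertools import product
from typing import List

@dataclass
class Module:
    name: str
    local_cost: float   # expected cost if executed on the device
    remote_cost: float  # expected cost if offloaded (execution + data transfer)

@dataclass
class Seq:              # sequential composition: costs add up
    parts: List["Node"]

@dataclass
class Cond:             # conditional composition: probability-weighted branches
    probs: List[float]
    branches: List["Node"]

@dataclass
class Par:              # parallel composition: finishes when the slowest branch does
    branches: List["Node"]

Node = object  # union of Module | Seq | Cond | Par (for readability only)

def expected_cost(node, decision):
    """Expected application cost for a given offloading decision (module name -> bool)."""
    if isinstance(node, Module):
        return node.remote_cost if decision[node.name] else node.local_cost
    if isinstance(node, Seq):
        return sum(expected_cost(p, decision) for p in node.parts)
    if isinstance(node, Cond):
        return sum(p * expected_cost(b, decision)
                   for p, b in zip(node.probs, node.branches))
    if isinstance(node, Par):
        return max(expected_cost(b, decision) for b in node.branches)
    raise TypeError(node)

def modules(node):
    if isinstance(node, Module):
        yield node
    else:
        for child in getattr(node, "parts", []) + getattr(node, "branches", []):
            yield from modules(child)

def best_decision(app):
    """Exhaustive search over offloading decisions (fine for this toy example;
    the thesis uses dynamic programming instead)."""
    names = [m.name for m in modules(app)]
    best = None
    for bits in product([False, True], repeat=len(names)):
        d = dict(zip(names, bits))
        c = expected_cost(app, d)
        if best is None or c < best[1]:
            best = (d, c)
    return best

# Toy application: a parsing step, then a cheap or heavy branch, then two parallel parts.
app = Seq([Module("parse", 4, 2),
           Cond([0.7, 0.3], [Module("light", 1, 3), Module("heavy", 10, 4)]),
           Par([Module("render", 3, 5), Module("log", 1, 1)])])
print(best_decision(app))
```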
2

From mobile to cloud: Using bio-inspired algorithms for collaborative application offloading

Golchay, Roya, 26 January 2016
Not bounded by time or place, and now offering a wide range of capabilities, smartphones are all-in-one, always-connected devices, the tools users favor as their most effective, convenient and necessary means of communication. Applications developed for smartphones face a growing demand for functionality from users, for data collection and storage from nearby IoT devices, and for computing resources for data analysis and user profiling, while at the same time they must fit into a compact and constrained design, a limited energy budget, and a relatively resource-poor execution environment.

Using resource-rich systems is the classic solution introduced in Mobile Cloud Computing to overcome these limitations by remotely executing all or part of an application in a cloud environment, a technique known as application offloading. Offloading to a cloud implemented as a geographically distant data center, however, introduces network latency that is not acceptable to smartphone users, and massive offloading to a centralized architecture creates a bottleneck that prevents the scalability required by the expanding market of IoT devices. Fog Computing has been introduced to bring storage and computation capabilities back into the user's vicinity or close to where they are needed. Some architectures are emerging, but few algorithms exist to deal with the dynamic properties of these environments.

In this thesis, we focus on the design of ACOMMA, an Ant-inspired Collaborative Offloading Middleware for Mobile Applications. It is a service-oriented architecture that dynamically offloads application partitions, simultaneously, to several remote clouds or to spontaneously created local clouds that include devices in the vicinity. The main contributions of this thesis are twofold. While many middlewares address one or more of the offloading challenges, few propose an open, service-based architecture that is easy to use on any mobile device without special requirements. Among the main challenges are the questions of what and when to offload in a dynamically changing environment where the mobile device profile, the context, and the server properties all play a considerable role in effectiveness. To this end, we develop bio-inspired decision-making algorithms: a dynamic bi-objective decision-making process with learning, and a decision-making process that collaborates with other mobile devices in the vicinity. We define an offloading mechanism based on fine-grained, method-level partitioning of the application's call graph, and we use ant colony algorithms to optimize, bi-objectively, the CPU consumption and the total execution time, including network latency. We show that ant colony algorithms readily adapt to changes in context, can be made very efficient by adding string-matching caching, and easily allow the application profile to be disseminated in order to create collaborative offloading in the vicinity.
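A rough illustration of the ant-inspired decision making: the sketch below scalarizes the two objectives (device CPU consumption and total execution time including network latency) and lets a small ant colony converge on a per-method offloading decision. The method names, cost figures and the weighted-sum scalarization are assumptions for illustration only, not ACOMMA's actual implementation, which partitions a real call graph and also collaborates with devices in the vicinity.

```python
# Minimal ant-colony sketch of a per-method offloading decision (illustrative only).
import random

# Per-method profile: (device CPU cost, remote execution time, data to transfer in KB)
methods = {
    "detectFaces":  (8.0, 2.0, 120.0),
    "applyFilter":  (5.0, 1.5,  80.0),
    "encodeResult": (3.0, 1.0, 200.0),
}
NETWORK_KB_PER_S = 500.0                      # assumed current uplink estimate
ALPHA, RHO, ANTS, ITERS = 0.7, 0.1, 20, 50    # weight, evaporation, colony size, iterations

def objectives(assignment):
    """Return (CPU consumed on the device, total execution time incl. network latency)."""
    cpu = time = 0.0
    for name, offload in assignment.items():
        local_cpu, remote_t, data_kb = methods[name]
        if offload:
            time += remote_t + data_kb / NETWORK_KB_PER_S
        else:
            cpu += local_cpu
            time += local_cpu            # toy model: local time proportional to CPU cost
    return cpu, time

def score(assignment):
    cpu, time = objectives(assignment)
    return ALPHA * cpu + (1 - ALPHA) * time   # weighted-sum scalarization of the two objectives

# Pheromone per method and choice: index 0 = keep local, index 1 = offload.
pheromone = {m: [1.0, 1.0] for m in methods}
best = None
for _ in range(ITERS):
    for _ in range(ANTS):
        # Each ant builds a full offloading decision, biased by pheromone levels.
        assignment = {m: random.random() < pheromone[m][1] / sum(pheromone[m])
                      for m in methods}
        c = score(assignment)
        if best is None or c < best[1]:
            best = (assignment, c)
    # Evaporate, then reinforce the best decision found so far.
    for m in methods:
        pheromone[m] = [p * (1 - RHO) for p in pheromone[m]]
        pheromone[m][1 if best[0][m] else 0] += 1.0 / (1.0 + best[1])

print(best)
```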
