41

Extending the battery life of mobile device by computation offloading

Qian, Hao January 1900 (has links)
Doctor of Philosophy / Computing and Information Sciences / Daniel A. Andresen / The need for increased performance of mobile devices directly conflicts with the desire for longer battery life. Offloading computation to resourceful servers is an effective method to reduce energy consumption and enhance performance for mobile applications. Today, most mobile devices have fast wireless links such as 4G and Wi-Fi, making computation offloading a reasonable way to extend the battery life of mobile devices. Android provides mechanisms for creating mobile applications but lacks a native scheduling system for determining where code should be executed. We present Jade, a system that adds sophisticated energy-aware computation offloading capabilities to Android applications. Jade monitors device and application status and automatically decides where code should be executed. Jade dynamically adjusts its offloading strategy by adapting to workload variation, communication costs, and device status. Jade minimizes the burden on developers to build applications with computation offloading ability by providing an easy-to-use API. Evaluation shows that Jade can reduce average power consumption for mobile devices by up to 37% while improving application performance.
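A minimal sketch, in Python, of the kind of energy-aware offload decision described above. This is not Jade's actual API; the function name, parameters, and example values are illustrative assumptions, and the energy model (transfer time multiplied by radio power) is a deliberate simplification.

    # Hypothetical offload decision in the spirit of the Jade scheduler described
    # above. All names, parameters, and values are illustrative assumptions.
    def should_offload(local_energy_j, input_bytes, output_bytes,
                       uplink_bps, downlink_bps, radio_power_w):
        """Return True if shipping the computation to the server is estimated
        to cost less energy than running it locally."""
        # Energy spent on the wireless link: transfer time times radio power.
        transfer_s = input_bytes * 8 / uplink_bps + output_bytes * 8 / downlink_bps
        offload_energy_j = transfer_s * radio_power_w
        return offload_energy_j < local_energy_j

    # Example: a 2 MB input and a small result over a 20 Mbit/s Wi-Fi link.
    print(should_offload(local_energy_j=5.0, input_bytes=2e6, output_bytes=1e4,
                         uplink_bps=20e6, downlink_bps=20e6, radio_power_w=1.2))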
42

The Reading of Rotated Text - An Embodied Account

January 2013 (has links)
abstract: Individuals engaged in perceptual tasks often use their bodies to lighten the cognitive load; that is, they replace internal (mental) processing with external (body-based) processing. The present investigation explores how the body is used in the task of reading rotated text. The experimental design allowed participants to exhibit spontaneous behavior and choose which strategies to use in order to complete the task efficiently. The results demonstrate that the use of external strategies can benefit performance by offloading internal processing. / Dissertation/Thesis / M.S. Psychology 2013
43

Offloading Virtual Network Functions – Hierarchical Approach

Langlet, Jonatan January 2020 (has links)
Next generation mobile networks are designed to run in a virtualized environment, enabling rapid infrastructure deployment and high flexibility for coping with increasing traffic demands and new service requirements. Such network function virtualization introduces additional packet latencies and potential bottlenecks not present when legacy network equipment runs on dedicated hardware; these bottlenecks include PCIe transfer delays, virtualization overhead, and the use of commodity server hardware that is not optimized for packet processing. Recent developments in P4-programmable networking devices make it possible to implement complex packet processing pipelines directly in the network data plane, allowing critical traffic flows to be offloaded and flexibly hardware-accelerated on programmable packet processing hardware before entering the virtualized environment. In this thesis, we design and implement a novel hybrid NFV processing architecture that integrates programmable NICs and commodity server hardware and is capable of offloading virtual network functions for specified traffic flows directly to the server network card; these flows completely bypass the softwarization overhead, while less sensitive traffic is processed on the underlying host server. An evaluation in a testbed with customized traffic generators shows that accelerated flows have significantly lower jitter and latency than flows processed on commodity server hardware. Our evaluation gives important insights into the design of such hardware-accelerated virtual network deployments, showing that hybrid network architectures are a viable solution for enabling infrastructure scalability without sacrificing critical flow performance.
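The record contains no code; the following Python sketch only illustrates, under assumed names and capacities, the control-plane decision the abstract describes: marking which flows are accelerated on the programmable NIC and which fall back to the software VNF chain on the host.

    # Illustrative only; not from the thesis. Latency-critical flows are pinned
    # to the NIC offload table while space remains; everything else goes to the
    # host VNF chain. Table size and latency budget are assumed values.
    OFFLOAD_TABLE = set()        # 5-tuples currently accelerated on the NIC
    LATENCY_BUDGET_US = 50       # assumed per-flow latency requirement

    def place_flow(five_tuple, required_latency_us, nic_table_capacity=1024):
        if required_latency_us <= LATENCY_BUDGET_US and len(OFFLOAD_TABLE) < nic_table_capacity:
            OFFLOAD_TABLE.add(five_tuple)
            return "nic"
        return "host"

    print(place_flow(("10.0.0.1", "10.0.0.2", 5000, 443, "udp"), required_latency_us=20))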
44

Distributed Orchestration Framework for Fog Computing

Rahafrouz, Amir January 2019 (has links)
The rise of IoT-based systems is making an impact on our daily lives and environment. Fog computing is a paradigm that processes IoT data at the first hop of the access network instead of in distant clouds, and it promises a range of new applications. A mature framework for fog computing is still lacking. In this study, we propose an approach for monitoring fog nodes in a distributed system using the FogFlow framework. We extend the functionality of FogFlow by adding monitoring of Docker containers using cAdvisor, and we use Prometheus to collect and aggregate the distributed data. The monitoring data of the entire distributed system of fog nodes is accessed via the Prometheus API. Furthermore, the monitoring data is used to rank fog nodes and choose where to place serverless functions (fog functions). The ranking mechanism uses the Analytic Hierarchy Process (AHP) to place fog functions according to the resource utilization and saturation of the fog nodes' hardware. Finally, an experimental test-bed is set up with an image-processing application that detects faces. The effect of our ranking approach on the Quality of Service is measured and compared to the current FogFlow placement.
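A rough sketch of AHP-style ranking of fog nodes by utilization and saturation, as mentioned in the abstract. The pairwise-comparison judgement, the node metrics, and the scoring convention (lower utilization and saturation score higher) are assumptions made for illustration, not values or code from the thesis.

    def ahp_weights(pairwise):
        """Approximate AHP priority vector: normalize columns, then average rows."""
        n = len(pairwise)
        col_sums = [sum(pairwise[r][c] for r in range(n)) for c in range(n)]
        return [sum(pairwise[r][c] / col_sums[c] for c in range(n)) / n for r in range(n)]

    # Criterion 1: CPU utilization, criterion 2: saturation. The judgement that
    # utilization matters three times as much as saturation is an assumption.
    weights = ahp_weights([[1, 3],
                           [1 / 3, 1]])

    nodes = {"fog-1": (0.40, 0.10), "fog-2": (0.75, 0.05), "fog-3": (0.20, 0.30)}

    def score(metrics):
        # Lower utilization/saturation yields a higher score, weighted by AHP priorities.
        return sum(w * (1.0 - m) for w, m in zip(weights, metrics))

    best = max(nodes, key=lambda name: score(nodes[name]))
    print(best)  # fog node chosen to host the fog function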
45

Vehicles as a Mobile Cloud: Modelling, Optimization and Performance Analysis

Vigneri, Luigi 11 July 2017 (has links)
The large diffusion of handheld devices is leading to exponential growth in mobile traffic demand, which is already overloading the core network. To deal with this problem, several works suggest storing content (files or videos) in small cells or user equipment. In this thesis, we push the idea of caching at the edge a step further and propose to use public or private transportation as mobile small cells and caches. Vehicles are widespread in modern cities, and the majority of them could readily be equipped with network connectivity and storage. The adoption of such a mobile cloud, which does not suffer from energy constraints (compared to user equipment), reduces installation and maintenance costs (compared to fixed small cells). In our model, a user can opportunistically download chunks of a requested content from nearby vehicles and is redirected to the cellular network after a deadline (imposed by the operator) or when her playout buffer empties. The main goal of the work is to suggest to an operator how to optimally replicate content in order to minimize the load on the core network. The main contributions are: (i) Modelling. We model the above scenario considering heterogeneous content sizes, generic mobility, and a number of other system parameters. (ii) Optimization. We formulate optimization problems to compute allocation policies under different models and constraints. (iii) Performance analysis. We build a MATLAB simulator to validate the theoretical findings through real trace-based simulations. We show that, even with low technology penetration, the proposed caching policies are able to offload more than 50 percent of the mobile traffic demand.
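A toy illustration of the replication question the abstract raises: how many cached copies of each content to place in the vehicle fleet so that as much traffic as possible is served by vehicles before the deadline. The Poisson-contact assumption, the popularity values, the contact rate, and the deadline are invented for this sketch and do not come from the thesis.

    import math

    popularity = {"video_a": 0.5, "video_b": 0.3, "video_c": 0.2}  # request shares (assumed)
    contact_rate = 4.0   # vehicle encounters per hour per cached copy (assumed)
    deadline_h = 0.25    # the user falls back to cellular after 15 minutes
    total_copies = 10    # cache slots available across the fleet

    def offloaded_fraction(copies):
        # Probability of meeting at least one caching vehicle before the deadline,
        # assuming independent Poisson contacts.
        return 1.0 - math.exp(-copies * contact_rate * deadline_h)

    allocation = {c: 0 for c in popularity}
    for _ in range(total_copies):
        # Greedy: give the next copy to the content with the largest marginal gain.
        best = max(popularity, key=lambda c: popularity[c] *
                   (offloaded_fraction(allocation[c] + 1) - offloaded_fraction(allocation[c])))
        allocation[best] += 1

    print(allocation)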
46

Device-to-Device Data Offloading: From Model to Implementation

Rebecchi, Filippo 18 September 2015 (has links)
Mobile data traffic is expected to reach 24.3 exabytes by 2019. Accommodating this growth in a traditional way would require major investments in the radio access network. In this thesis, we turn our attention to an unconventional solution: mobile data offloading through device-to-device (D2D) communications. Our first contribution is DROiD, an offloading strategy that exploits the availability of the cellular infrastructure as a feedback channel to track the progress of content dissemination. DROiD adapts the injection strategy to the pace of the dissemination, making it both reactive and relatively simple, and allowing a substantial amount of cellular traffic to be saved even under tight delivery delay constraints. Then, we shift the focus to the gains that D2D communications could bring if coupled with multicast transmissions. We demonstrate that, by employing a wise balance of multicast and D2D communications, we can improve both the spectral efficiency and the load in cellular networks. To let the network adapt to current conditions, we devise a learning strategy based on the multi-armed bandit algorithm to identify the best mix of multicast and D2D communications. Finally, we investigate cost models for operators wanting to reward users who cooperate in D2D offloading. We propose separating the notion of seeders (users that carry content but do not distribute it) from that of forwarders (users that are tasked to distribute content). With the aid of an analytic framework based on Pontryagin's Maximum Principle, we develop an optimal offloading strategy. The results provide insight into the interactions between seeders, forwarders, and the evolution of data dissemination.
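A hedged sketch of the multi-armed bandit idea mentioned above: an epsilon-greedy agent choosing among a few candidate multicast/D2D mixes. The arms, the synthetic reward function, and the parameters are invented for illustration and are not the algorithm from the thesis.

    import random

    arms = [0.0, 0.25, 0.5, 0.75, 1.0]   # fraction of content pushed via multicast
    counts = [0] * len(arms)
    values = [0.0] * len(arms)           # running estimate of the offloading gain
    epsilon = 0.1

    def observe_reward(multicast_fraction):
        # Stand-in for real network feedback (e.g. fraction of traffic offloaded).
        return random.gauss(1.0 - abs(multicast_fraction - 0.5), 0.05)

    for _ in range(1000):
        if random.random() < epsilon:
            i = random.randrange(len(arms))                      # explore
        else:
            i = max(range(len(arms)), key=lambda j: values[j])   # exploit
        reward = observe_reward(arms[i])
        counts[i] += 1
        values[i] += (reward - values[i]) / counts[i]            # incremental mean

    print("best mix so far:", arms[max(range(len(arms)), key=lambda j: values[j])])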
47

Energy-Efficient and Secure Device-to-Device Communications in the Next-Generation Wireless Network

Ying, Daidong 28 August 2018 (has links)
No description available.
48

Computation Offloading for Real-Time Applications

Tahirović, Emina January 2023 (has links)
With the vast and ever-growing range of applications that seek real-time data processing and timing-predictable services comes an extensive list of problems when trying to establish these applications in the real-time domain. Depending on the purpose of a real-time application, the demands it imposes on resources vary widely. Some real-time applications require large computational power, large storage capacities, and large energy storage. However, not all devices can be equipped with processors, batteries, or power banks adequate for such resource requirements. While these issues can be mitigated by offloading computations to the cloud, this is not a universal solution for all applications. Real-time applications need to be predictable and reliable, whereas the cloud can introduce high and unpredictable latencies. One possible improvement in predictability and reliability comes from offloading to the edge, which is closer than the cloud and can reduce latencies. However, even the edge comes with certain limitations, and it is not entirely clear how, where, and when applications should be offloaded. The problem then presents itself as: how should real-time applications in the edge-cloud architecture be modeled? Moreover, how should they be modeled to be agnostic to specific technologies and to support timing analysis? Another question is 'when' to offload within the edge-cloud architecture: for example, critical computations can be offloaded to the edge while less critical computations are offloaded to the cloud, but before one can determine 'where' to offload, one must determine 'when'. This thesis therefore focuses on designing a new technology-agnostic mathematical model that allows holistic modeling of real-time applications on the edge-cloud continuum and provides support for timing analysis.
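A small sketch of the 'when and where to offload' question in the spirit of the abstract: critical computations prefer the edge, less critical ones the cloud, and the choice is constrained by the deadline. The latency figures and the preference order are assumptions made for illustration, not the model developed in the thesis.

    # Assumed worst-case timings (milliseconds); not values from the thesis.
    LOCAL_EXEC_MS = 120
    EDGE_RTT_MS, EDGE_EXEC_MS = 10, 40
    CLOUD_RTT_MS, CLOUD_EXEC_MS = 80, 15

    def place_task(deadline_ms, critical):
        """Walk a preference order and return the first placement whose
        worst-case response time fits within the deadline."""
        response_ms = {"local": LOCAL_EXEC_MS,
                       "edge": EDGE_RTT_MS + EDGE_EXEC_MS,
                       "cloud": CLOUD_RTT_MS + CLOUD_EXEC_MS}
        # Critical tasks avoid the less predictable cloud path.
        preference = ["edge", "local"] if critical else ["cloud", "edge", "local"]
        for target in preference:
            if response_ms[target] <= deadline_ms:
                return target
        return "reject"

    print(place_task(deadline_ms=60, critical=True))    # -> edge
    print(place_task(deadline_ms=150, critical=False))  # -> cloud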
49

CONTENT TRADING AND PRIVACY-AWARE PRICING FOR EFFICIENT SPECTRUM UTILIZATION

Alotaibi, Faisal F. January 2019 (has links)
No description available.
50

Computational Offloading for Sequentially Staged Tasks: A Dynamic Approach Demonstrated on Aerial Imagery Analysis

Veltri, Joshua 02 February 2018 (has links)
No description available.
