41

Um serviço de offloading de dados contextuais com suporte à privacidade / A Contextual Data Offloading Service With Privacy Support

Gomes, Francisco Anderson de Almada January 2017 (has links)
GOMES, Francisco Anderson de Almada. Um serviço de offloading de dados contextuais com suporte à privacidade. 2017. 95 f. Dissertação (Mestrado em Ciência da Computação) - Universidade Federal do Ceará, Fortaleza, 2017. / Mobile devices have become a common tool in our daily routine, and mobile applications increasingly demand access to contextual information. For instance, applications require data about the user's environment and profile in order to adapt their interfaces, services, and content to that context. Mobile applications with this behavior are known as context-aware applications, and several software infrastructures have been created to support their development. However, most of them do not store a history of the contextual data, since mobile devices are resource constrained, and most are not built with the privacy of contextual data in mind, so applications may expose contextual data without user consent. This dissertation addresses these topics by extending an existing middleware platform that supports the development of mobile context-aware applications. The work presents a service named COP (Contextual data Offloading service with Privacy Support) based on: (i) a context model, (ii) a privacy policy, and (iii) synchronization policies. COP stores and processes the contextual data generated by several mobile devices using the computational power of the cloud. To evaluate the work, an application was developed that exercises both the data migration and the privacy mechanisms of COP. Two further experiments were conducted. The first evaluated the impact of processing contextual filters on the mobile device and in a remote environment, measuring processing time and energy consumption; it showed that migrating data from the mobile device to a remote environment is advantageous. The second evaluated the energy consumption of sending contextual data.
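The abstract describes COP as combining a privacy policy with synchronization policies that decide which contextual data may leave the device. The sketch below illustrates that idea in minimal form; the class and attribute names (ContextEntry, PrivacyPolicy, blocked_kinds, and so on) are assumptions for illustration and are not COP's actual API.

```python
# Illustrative sketch (not COP's real interface): filter contextual data
# against a per-user privacy policy before offloading it to the cloud.
from dataclasses import dataclass, field

@dataclass
class ContextEntry:
    kind: str          # e.g. "location", "battery", "nearby_devices"
    value: object
    timestamp: float

@dataclass
class PrivacyPolicy:
    # context kinds the user refuses to share with the remote service
    blocked_kinds: set = field(default_factory=set)

def entries_to_offload(entries, policy):
    """Return only the entries the privacy policy allows to leave the device."""
    return [e for e in entries if e.kind not in policy.blocked_kinds]

# Example: location data stays local, battery readings may be synchronized.
policy = PrivacyPolicy(blocked_kinds={"location"})
history = [ContextEntry("location", (3.71, -38.54), 1.0),
           ContextEntry("battery", 0.62, 2.0)]
print(entries_to_offload(history, policy))   # only the battery entry remains
```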
42

Mobile data offloading via urban public transportation networks / Données mobiles délestant sur les réseaux de transports publics urbains

Su, Qiankun 19 May 2017 (has links)
Mobile data traffic is increasing at an exponential rate with the proliferation of mobile devices and easy access to large content such as video. Traffic demand is expected to soar in the next five years, and a new generation of mobile networks (5G) is currently being developed to address the looming bandwidth crunch. However, significant 5G deployments are not expected until 2020 or even beyond. As such, any solution that offloads cellular traffic onto other available networks is of high interest, the main example being the successful offloading of cellular traffic onto WiFi. In this context, we propose to leverage the public transportation networks (PTNs) created by regular bus lines in urban centers as another offloading option for delay-tolerant data such as video on demand. This thesis proposes a novel content delivery infrastructure in which wireless access points (APs) are installed on both bus stops and buses. Buses act as data mules, creating a delay-tolerant network capable of carrying content that users can access while commuting on public transportation. Building such a network raises several core challenges: (i) selecting the bus stops on which it is best to install APs, (ii) efficiently routing the data, (iii) relieving congestion points in major hubs, and (iv) minimizing the cost of the full architecture. These challenges are addressed in the three parts of this thesis. The first part presents our content delivery infrastructure, whose primary aim is to carry large volumes of data. By analyzing the publicly available timetables of PTN providers in different cities, we show that it is beneficial to install APs at the end stations of bus lines. Knowing the underlying topology and schedule of the PTNs, we propose to pre-calculate static routes between stations. This leads to a dramatic decrease in message replications and transfers compared to the state-of-the-art Epidemic delay-tolerant protocol. Simulation results for three cities demonstrate that our routing policy increases the number of delivered messages by 4 to 8 times while reducing the overhead ratio. The second part addresses the problem of relieving congestion at stations where several bus lines converge and have to exchange data through the AP. The proposed solution leverages XOR network coding, where encoding and decoding are performed hop by hop for flows crossing at an AP. A theoretical analysis of the delivery probability and overhead ratio in a general setting indicates that the maximum delivery probability is increased by 50% while the overhead ratio is reduced by 50% when such network coding is applied. Simulations of this general setting corroborate these points and show, in addition, that the average delay is reduced as well. Introducing our XOR network coding into the content delivery infrastructure using real bus timetables, we demonstrate a 35%-48% improvement in the number of messages delivered. The third part of the thesis proposes a cost-effective architecture. It classifies PTN bus stops into three categories, each equipped with a different type of wireless AP, allowing for fine-grained cost control. Simulation results demonstrate the viability of our design choices. In particular, the 3-Tier architecture is shown to guarantee end-to-end connectivity and reduce the deployment cost by a factor of 3 while delivering 30% more packets than a baseline architecture. It can offload a large amount of mobile data, for instance 4.7 terabytes within 12 hours in the Paris topology.
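The hop-by-hop XOR coding described above can be illustrated with the canonical two-flow exchange at a relay: the AP broadcasts the XOR of two packets heading in opposite directions, and each next hop recovers the packet it is missing using the one it already holds. The sketch below is a simplified illustration of that mechanism, not code from the thesis.

```python
# Simplified illustration of hop-by-hop XOR network coding at a bus-stop AP.
# Two flows cross at the AP; instead of forwarding packet A and packet B
# separately, the AP broadcasts A XOR B, halving its transmissions.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # Pad the shorter packet with zero bytes so lengths match.
    n = max(len(a), len(b))
    a, b = a.ljust(n, b"\x00"), b.ljust(n, b"\x00")
    return bytes(x ^ y for x, y in zip(a, b))

# Bus 1 carries packet A (eastbound flow); bus 2 carries packet B (westbound flow).
pkt_a, pkt_b = b"video-chunk-A", b"video-chunk-B"

coded = xor_bytes(pkt_a, pkt_b)        # single broadcast by the AP

# Each next hop decodes with the packet it already carries.
recovered_b = xor_bytes(coded, pkt_a)  # bus 1 side recovers B
recovered_a = xor_bytes(coded, pkt_b)  # bus 2 side recovers A
assert recovered_a.rstrip(b"\x00") == pkt_a and recovered_b.rstrip(b"\x00") == pkt_b
```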
43

Extending the battery life of mobile device by computation offloading

Qian, Hao January 1900 (has links)
Doctor of Philosophy / Computing and Information Sciences / Daniel A. Andresen / The need for increased performance of mobile devices directly conflicts with the desire for longer battery life. Offloading computation to resourceful servers is an effective method to reduce energy consumption and enhance performance for mobile applications. Today, most mobile devices have fast wireless links such as 4G and Wi-Fi, making computation offloading a reasonable way to extend the battery life of a mobile device. Android provides mechanisms for creating mobile applications but lacks a native scheduling system for determining where code should be executed. We present Jade, a system that adds sophisticated energy-aware computation offloading capabilities to Android applications. Jade monitors device and application status and automatically decides where code should be executed. It dynamically adjusts its offloading strategy by adapting to workload variation, communication costs, and device status, and it minimizes the burden on developers by providing an easy-to-use API for building applications with computation offloading ability. Evaluation shows that Jade can reduce average power consumption for a mobile device by up to 37% while improving application performance.
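The decision Jade automates, whether a method runs locally or on a server, essentially compares the energy of local execution with the energy of shipping state over the wireless link and idling while waiting for the result. The sketch below illustrates that trade-off with assumed parameter names (local_power_w, uplink_mbps, and so on); it is not Jade's actual scheduler.

```python
# Illustrative energy-aware offloading decision (not Jade's real API).
# Offload when the estimated energy of sending the payload and waiting for
# the remote result is lower than running the computation locally.

def should_offload(local_runtime_s: float,
                   local_power_w: float,
                   payload_mb: float,
                   uplink_mbps: float,
                   radio_power_w: float,
                   remote_runtime_s: float,
                   idle_power_w: float) -> bool:
    energy_local = local_runtime_s * local_power_w
    transfer_s = payload_mb * 8.0 / uplink_mbps
    energy_remote = transfer_s * radio_power_w + remote_runtime_s * idle_power_w
    return energy_remote < energy_local

# Example: a 2 s CPU-heavy task with a 0.5 MB payload over Wi-Fi.
print(should_offload(local_runtime_s=2.0, local_power_w=2.5,
                     payload_mb=0.5, uplink_mbps=20.0,
                     radio_power_w=1.2, remote_runtime_s=0.3,
                     idle_power_w=0.4))   # True -> offload
```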
44

The Reading of Rotated Text - An Embodied Account

January 2013 (has links)
abstract: Individuals engaged in perceptual tasks often use their bodies to lighten the cognitive load, that is, they replace internal (mental) processing with external (body-based) processing. The present investigation explores how the body is used in the task of reading rotated text. The experimental design allowed the participants to exhibit spontaneous behavior and choose what strategies to use in order to efficiently complete the task. The results demonstrate that the use of external strategies can benefit performance by offloading internal processing. / Dissertation/Thesis / M.S. Psychology 2013
45

Offloading Virtual Network Functions – Hierarchical Approach

Langlet, Jonatan January 2020 (has links)
Next-generation mobile networks are designed to run in a virtualized environment, enabling rapid infrastructure deployment and high flexibility for coping with increasing traffic demands and new service requirements. Such network function virtualization imposes additional packet latencies and potential bottlenecks that are not present when legacy network equipment runs on dedicated hardware; these bottlenecks include PCIe transfer delays, virtualization overhead, and the use of commodity server hardware that is not optimized for packet processing. Recent developments in P4-programmable networking devices make it possible to implement complex packet processing pipelines directly in the network data plane, allowing critical traffic flows to be offloaded and flexibly hardware-accelerated on programmable packet processing hardware before entering the virtualized environment. In this thesis, we design and implement a novel hybrid NFV processing architecture that integrates programmable NICs and commodity server hardware and is capable of offloading virtual network functions for specified traffic flows directly to the server's network card. These flows completely bypass the softwarization overhead, while less sensitive traffic is processed on the underlying host server. An evaluation in a testbed with customized traffic generators shows that accelerated flows have significantly lower jitter and latency than flows processed on commodity server hardware. Our evaluation gives important insights into the design of such hardware-accelerated virtual network deployments, showing that hybrid network architectures are a viable solution for enabling infrastructure scalability without sacrificing critical-flow performance.
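A central piece of such a hybrid design is the rule that decides which flows get an offload entry on the programmable NIC and which fall through to the host VNFs. The sketch below shows one way such a selection could look, assuming a flow table keyed by 5-tuple and a per-flow latency budget; the names, capacity, and threshold are illustrative assumptions, not values from the thesis.

```python
# Illustrative flow-placement logic for a hybrid NIC/host NFV pipeline.
# Latency-critical flows get a rule on the programmable NIC (fast path);
# everything else is handled by the virtualized functions on the host.
from collections import namedtuple

Flow = namedtuple("Flow", "five_tuple latency_budget_ms rate_mbps")

NIC_RULE_CAPACITY = 1000        # assumed hardware table size
FAST_PATH_BUDGET_MS = 1.0       # flows tighter than this go to the NIC

def place_flows(flows):
    nic_rules, host_flows = [], []
    # Tightest latency budgets claim the limited NIC table entries first.
    for f in sorted(flows, key=lambda f: f.latency_budget_ms):
        if f.latency_budget_ms <= FAST_PATH_BUDGET_MS and len(nic_rules) < NIC_RULE_CAPACITY:
            nic_rules.append(f)
        else:
            host_flows.append(f)
    return nic_rules, host_flows

flows = [Flow(("10.0.0.1", "10.0.0.2", 5000, 5001, "udp"), 0.5, 200),
         Flow(("10.0.0.3", "10.0.0.4", 80, 40000, "tcp"), 20.0, 5)]
nic, host = place_flows(flows)
print(len(nic), "flow(s) offloaded to NIC,", len(host), "kept on host")
```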
46

Distributed Orchestration Framework for Fog Computing

Rahafrouz, Amir January 2019 (has links)
The rise of IoT-based systems is making an impact on our daily lives and environment. Fog computing is a paradigm that processes IoT data at the first hop of the access network instead of in distant clouds, and it promises a range of new applications. However, a mature framework for fog computing is still lacking. In this study, we propose an approach for monitoring fog nodes in a distributed system using the FogFlow framework. We extend the functionality of FogFlow by adding monitoring of Docker containers using cAdvisor, and we use Prometheus to collect and aggregate the distributed data. The monitoring data of the entire distributed system of fog nodes is accessed via the Prometheus API. Furthermore, the monitoring data is used to rank fog nodes and choose where to place serverless functions (fog functions). The ranking mechanism uses the Analytic Hierarchy Process (AHP) to place fog functions according to the resource utilization and saturation of the fog nodes' hardware. Finally, an experimental test-bed is set up with an image-processing application that detects faces, and the effect of our ranking approach on quality of service is measured and compared to that of the current FogFlow.
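AHP-based placement of the kind described here typically derives criterion weights from a pairwise-comparison matrix and then scores each node on normalized metrics. Below is a compact sketch of that ranking step; the metric names (cpu_util, mem_util, saturation) and the comparison values are assumptions for illustration, not the thesis's actual configuration.

```python
# Simplified AHP-style ranking of fog nodes (illustrative, not the thesis code).
# Criterion weights come from a pairwise-comparison matrix; each node is then
# scored on normalized "lower is better" metrics and the best node is chosen.
import numpy as np

criteria = ["cpu_util", "mem_util", "saturation"]

# Pairwise comparisons (Saaty scale): CPU twice as important as memory,
# three times as important as saturation; memory twice as important as saturation.
pairwise = np.array([[1.0, 2.0, 3.0],
                     [1/2, 1.0, 2.0],
                     [1/3, 1/2, 1.0]])

# Approximate the principal eigenvector by normalizing columns and averaging rows.
weights = (pairwise / pairwise.sum(axis=0)).mean(axis=1)

nodes = {"fog-1": [0.80, 0.60, 0.30],   # metrics in [0, 1], lower is better
         "fog-2": [0.40, 0.50, 0.20],
         "fog-3": [0.65, 0.30, 0.70]}

def score(metrics):
    # Convert "lower is better" metrics into per-criterion preferences.
    return float(np.dot(weights, 1.0 - np.array(metrics)))

best = max(nodes, key=lambda n: score(nodes[n]))
print({n: round(score(m), 3) for n, m in nodes.items()}, "->", best)  # fog-2 wins
```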
47

Les véhicules comme un mobile cloud : modélisation, optimisation et analyse des performances / Vehicles as a mobile cloud : modelling, optimization and performance analysis

Vigneri, Luigi 11 July 2017 (has links)
The large diffusion of handheld devices is leading to an exponential growth in mobile traffic demand that is already overloading the core network. To deal with this problem, several works suggest storing content (files or videos) in small cells or user equipment. In this thesis, we push the idea of caching at the edge a step further and propose to use public or private transportation as mobile small cells and caches. Vehicles are widespread in modern cities, and the majority of them could readily be equipped with network connectivity and storage. The adoption of such a mobile cloud, which does not suffer from energy constraints (compared to user equipment), reduces installation and maintenance costs (compared to small cells). In our work, a user can opportunistically download chunks of requested content from nearby vehicles and is redirected to the cellular network after a deadline (imposed by the operator) or when her playout buffer empties. The main goal of the work is to suggest to an operator how to optimally replicate content in order to minimize the load on the core network. The main contributions are: (i) Modelling: we model the above scenario considering heterogeneous content sizes, generic mobility, and a number of other system parameters. (ii) Optimization: we formulate optimization problems to calculate allocation policies under different models and constraints. (iii) Performance analysis: we build a MATLAB simulator to validate the theoretical findings through real trace-based simulations. We show that, even with low technology penetration, the proposed caching policies can offload more than 50 percent of the mobile traffic demand.
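The replication problem sketched in the abstract, deciding how many vehicle caches should hold each content in order to minimize cellular load, can be illustrated with a simple greedy allocation under a total cache budget, where each extra replica of a content reduces the expected cellular traffic in proportion to its popularity and the chance of meeting a caching vehicle. This is a toy illustration with an assumed objective and made-up names, not the allocation policies derived in the thesis.

```python
# Toy greedy replication under a global cache budget (illustrative only).
# Each additional replica of a content increases the chance a user meets a
# vehicle caching it before the deadline, saving cellular traffic.
import heapq

def expected_offload(replicas: int, popularity: float, size_mb: float,
                     p_meet: float = 0.1) -> float:
    # P(meet at least one caching vehicle before the deadline) * demand * size
    return popularity * size_mb * (1.0 - (1.0 - p_meet) ** replicas)

def allocate(contents, budget_replicas):
    """contents: dict name -> (popularity, size_mb). Greedy marginal-gain allocation."""
    alloc = {name: 0 for name in contents}
    heap = []
    for name, (pop, size) in contents.items():
        gain = expected_offload(1, pop, size) - expected_offload(0, pop, size)
        heapq.heappush(heap, (-gain, name))
    for _ in range(budget_replicas):
        _, name = heapq.heappop(heap)
        alloc[name] += 1
        pop, size = contents[name]
        nxt = expected_offload(alloc[name] + 1, pop, size) - expected_offload(alloc[name], pop, size)
        heapq.heappush(heap, (-nxt, name))
    return alloc

catalog = {"popular-show": (0.6, 500), "niche-doc": (0.1, 700), "news-clip": (0.3, 50)}
print(allocate(catalog, budget_replicas=20))
```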
48

Délestage de données en D2D : de la modélisation à la mise en oeuvre / Device-to-device data Offloading : from model to implementation

Rebecchi, Filippo 18 September 2015 (has links)
Mobile data traffic is expected to reach 24.3 exabytes by 2019. Accommodating this growth in a traditional way would require major investments in the radio access network. In this thesis, we turn our attention to an unconventional solution: mobile data offloading through device-to-device (D2D) communications. Our first contribution is DROiD, an offloading strategy that exploits the availability of the cellular infrastructure as a feedback channel. DROiD adapts its injection strategy to the pace of the dissemination, making it both reactive and relatively simple and allowing it to save a significant amount of data traffic even under tight delivery delay constraints. We then shift the focus to the gains that D2D communications could bring if coupled with multicast wireless networks. We demonstrate that, by employing a wise balance of multicast and D2D communications, we can improve spectral efficiency while reducing the load on cellular networks. To let the network adapt to current conditions, we devise a learning strategy based on the multi-armed bandit algorithm to identify the best mix of multicast and D2D communications. Finally, we investigate cost models for operators wanting to reward users who cooperate in D2D offloading. We propose separating the notion of seeders (users that carry content but do not distribute it) from that of forwarders (users that are tasked with distributing content). With the aid of an analytical framework based on Pontryagin's Maximum Principle, we develop an optimal offloading strategy. The results provide insight into the interactions between seeders, forwarders, and the evolution of data dissemination.
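The learning step mentioned above, picking the best mix of multicast and D2D injection as conditions change, is a natural fit for a multi-armed bandit. The sketch below shows an epsilon-greedy bandit over a few candidate mixes; the arm definitions and the reward function are placeholders for illustration, not the thesis's actual formulation.

```python
# Illustrative epsilon-greedy bandit choosing a multicast/D2D injection mix.
# Each "arm" is a candidate fraction of content copies pushed via multicast;
# the reward would come from observed offloading efficiency (placeholder here).
import random

arms = [0.0, 0.25, 0.5, 0.75, 1.0]      # fraction injected via multicast
counts = [0] * len(arms)
values = [0.0] * len(arms)              # running mean reward per arm
EPSILON = 0.1

def select_arm() -> int:
    if random.random() < EPSILON:
        return random.randrange(len(arms))                   # explore
    return max(range(len(arms)), key=lambda i: values[i])    # exploit

def update(arm: int, reward: float) -> None:
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]      # incremental mean

def observe_reward(mix: float) -> float:
    # Placeholder: in a real system this would be the measured fraction of
    # traffic kept off the cellular downlink for the chosen mix.
    return 1.0 - (mix - 0.6) ** 2 + random.gauss(0, 0.05)

for _ in range(2000):
    a = select_arm()
    update(a, observe_reward(arms[a]))

print("best mix:", arms[max(range(len(arms)), key=lambda i: values[i])])
```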
49

Energy-Efficient and Secure Device-to-Device Communications in the Next-Generation Wireless Network

Ying, Daidong 28 August 2018 (has links)
No description available.
50

Computation Offloading for Real-Time Applications

Tahirović, Emina January 2023 (has links)
The vast and ever-growing range of applications that seek real-time data processing and timing-predictable services brings an extensive list of problems when trying to establish these applications in the real-time domain. Depending on their purpose, real-time applications place vastly different demands on resources: some require large computational power, large storage capacity, or large energy reserves. However, not all devices can be equipped with processors, batteries, or power banks adequate for such requirements. These issues can be mitigated by offloading computations to the cloud, but this is not a universal solution: real-time applications need to be predictable and reliable, whereas the cloud can introduce high and unpredictable latencies. One possible improvement in predictability and reliability comes from offloading to the edge, which is closer than the cloud and can reduce latencies. Yet even the edge has limitations, and it is not clear exactly how, where, and when applications should be offloaded. The problem then presents itself as: how should real-time applications in the edge-cloud architecture be modeled? Moreover, how should they be modeled so as to remain agnostic of particular technologies while providing support for timing analysis? Another consideration is when to offload to the edge-cloud architecture: critical computations can be offloaded to the edge while less critical computations go to the cloud, but before one can determine where to offload, one must determine when. This thesis therefore focuses on designing a new technology-agnostic mathematical model that allows holistic modeling of real-time applications on the edge-cloud continuum and provides support for timing analysis.
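The "when and where" question raised here, keeping critical work near the device while pushing the rest further away, can be illustrated with a deadline check over estimated worst-case response times for each tier. The sketch below is a simplified illustration with assumed numbers (WCETs and network delays), not the model developed in the thesis.

```python
# Toy placement rule for real-time tasks across device, edge, and cloud.
# A task is placed on the farthest (cheapest) tier whose worst-case response
# time (execution + round-trip network delay) still meets its deadline.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    wcet_ms: float        # worst-case execution time on this tier
    rtt_ms: float         # worst-case round-trip network delay to reach it

def place(task_deadline_ms: float, tiers: list) -> str:
    # Tiers ordered from farthest/cheapest (cloud) to nearest (device).
    for tier in tiers:
        if tier.wcet_ms + tier.rtt_ms <= task_deadline_ms:
            return tier.name
    return "reject"        # no tier can guarantee the deadline

tiers = [Tier("cloud", wcet_ms=5, rtt_ms=80),    # assumed example numbers
         Tier("edge", wcet_ms=10, rtt_ms=8),
         Tier("device", wcet_ms=40, rtt_ms=0)]

for deadline in (100, 30, 20):
    print(f"deadline {deadline} ms ->", place(deadline, tiers))
# deadline 100 ms -> cloud, 30 ms -> edge, 20 ms -> edge
```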
